Feb 18 19:17:19 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 18 19:17:19 crc restorecon[4685]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 19:17:19 crc restorecon[4685]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc 
restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc 
restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 
19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc 
restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc 
restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 
19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc 
restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc 
restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc 
restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 
crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc 
restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc 
restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 18 19:17:20 crc kubenswrapper[4942]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 19:17:20 crc kubenswrapper[4942]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 18 19:17:20 crc kubenswrapper[4942]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 19:17:20 crc kubenswrapper[4942]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 18 19:17:20 crc kubenswrapper[4942]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 18 19:17:20 crc kubenswrapper[4942]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.756875 4942 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763251 4942 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763297 4942 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763302 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763308 4942 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763314 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763320 4942 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763325 4942 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763330 4942 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763335 4942 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763340 4942 
feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763345 4942 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763350 4942 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763355 4942 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763359 4942 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763363 4942 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763367 4942 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763371 4942 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763376 4942 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763379 4942 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763384 4942 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763389 4942 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763404 4942 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763408 4942 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763412 4942 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763416 4942 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763420 4942 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763424 4942 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763429 4942 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763434 4942 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763438 4942 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763442 4942 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763445 4942 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763451 4942 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763455 4942 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763459 4942 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763463 4942 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763466 4942 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763470 4942 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763476 4942 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763480 4942 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763483 4942 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763487 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763490 4942 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763494 4942 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763497 4942 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763501 4942 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763505 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 19:17:20 
crc kubenswrapper[4942]: W0218 19:17:20.763510 4942 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763515 4942 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763528 4942 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763532 4942 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763535 4942 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763539 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763545 4942 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763549 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763553 4942 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763556 4942 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763559 4942 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763563 4942 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763567 4942 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763570 4942 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 
19:17:20.763573 4942 feature_gate.go:330] unrecognized feature gate: Example Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763577 4942 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763580 4942 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763584 4942 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763587 4942 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763590 4942 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763594 4942 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763597 4942 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763600 4942 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763605 4942 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763719 4942 flags.go:64] FLAG: --address="0.0.0.0" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763732 4942 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763743 4942 flags.go:64] FLAG: --anonymous-auth="true" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763750 4942 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763775 4942 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763782 4942 
flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763792 4942 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763801 4942 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763807 4942 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763812 4942 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763818 4942 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763825 4942 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763830 4942 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763835 4942 flags.go:64] FLAG: --cgroup-root="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763852 4942 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763858 4942 flags.go:64] FLAG: --client-ca-file="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763863 4942 flags.go:64] FLAG: --cloud-config="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763868 4942 flags.go:64] FLAG: --cloud-provider="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763873 4942 flags.go:64] FLAG: --cluster-dns="[]" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763884 4942 flags.go:64] FLAG: --cluster-domain="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763889 4942 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763894 4942 flags.go:64] FLAG: --config-dir="" Feb 18 19:17:20 crc kubenswrapper[4942]: 
I0218 19:17:20.763899 4942 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763904 4942 flags.go:64] FLAG: --container-log-max-files="5" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763914 4942 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763919 4942 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763925 4942 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763934 4942 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763939 4942 flags.go:64] FLAG: --contention-profiling="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763944 4942 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763948 4942 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763955 4942 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763960 4942 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763966 4942 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763970 4942 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763975 4942 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763979 4942 flags.go:64] FLAG: --enable-load-reader="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763983 4942 flags.go:64] FLAG: --enable-server="true" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763988 4942 flags.go:64] FLAG: 
--enforce-node-allocatable="[pods]" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764000 4942 flags.go:64] FLAG: --event-burst="100" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764006 4942 flags.go:64] FLAG: --event-qps="50" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764011 4942 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764016 4942 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764021 4942 flags.go:64] FLAG: --eviction-hard="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764029 4942 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764034 4942 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764039 4942 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764045 4942 flags.go:64] FLAG: --eviction-soft="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764050 4942 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764055 4942 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764069 4942 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764074 4942 flags.go:64] FLAG: --experimental-mounter-path="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764078 4942 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764082 4942 flags.go:64] FLAG: --fail-swap-on="true" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764087 4942 flags.go:64] FLAG: --feature-gates="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764093 4942 flags.go:64] FLAG: 
--file-check-frequency="20s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764097 4942 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764102 4942 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764106 4942 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764111 4942 flags.go:64] FLAG: --healthz-port="10248" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764116 4942 flags.go:64] FLAG: --help="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764120 4942 flags.go:64] FLAG: --hostname-override="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764124 4942 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764130 4942 flags.go:64] FLAG: --http-check-frequency="20s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764134 4942 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764139 4942 flags.go:64] FLAG: --image-credential-provider-config="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764143 4942 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764148 4942 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764152 4942 flags.go:64] FLAG: --image-service-endpoint="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764156 4942 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764160 4942 flags.go:64] FLAG: --kube-api-burst="100" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764164 4942 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764169 
4942 flags.go:64] FLAG: --kube-api-qps="50" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764173 4942 flags.go:64] FLAG: --kube-reserved="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764177 4942 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764181 4942 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764186 4942 flags.go:64] FLAG: --kubelet-cgroups="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764190 4942 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764194 4942 flags.go:64] FLAG: --lock-file="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764198 4942 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764202 4942 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764207 4942 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764214 4942 flags.go:64] FLAG: --log-json-split-stream="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764218 4942 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764222 4942 flags.go:64] FLAG: --log-text-split-stream="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764227 4942 flags.go:64] FLAG: --logging-format="text" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764237 4942 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764242 4942 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764247 4942 flags.go:64] FLAG: --manifest-url="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764251 4942 
flags.go:64] FLAG: --manifest-url-header="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764258 4942 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764263 4942 flags.go:64] FLAG: --max-open-files="1000000" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764269 4942 flags.go:64] FLAG: --max-pods="110" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764274 4942 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764286 4942 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764290 4942 flags.go:64] FLAG: --memory-manager-policy="None" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764295 4942 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764300 4942 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764304 4942 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764309 4942 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764323 4942 flags.go:64] FLAG: --node-status-max-images="50" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764327 4942 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764332 4942 flags.go:64] FLAG: --oom-score-adj="-999" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764337 4942 flags.go:64] FLAG: --pod-cidr="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764341 4942 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764347 4942 flags.go:64] FLAG: --pod-manifest-path="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764351 4942 flags.go:64] FLAG: --pod-max-pids="-1" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764356 4942 flags.go:64] FLAG: --pods-per-core="0" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764360 4942 flags.go:64] FLAG: --port="10250" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764364 4942 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764368 4942 flags.go:64] FLAG: --provider-id="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764373 4942 flags.go:64] FLAG: --qos-reserved="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764377 4942 flags.go:64] FLAG: --read-only-port="10255" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764381 4942 flags.go:64] FLAG: --register-node="true" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764385 4942 flags.go:64] FLAG: --register-schedulable="true" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764390 4942 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764399 4942 flags.go:64] FLAG: --registry-burst="10" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764403 4942 flags.go:64] FLAG: --registry-qps="5" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764407 4942 flags.go:64] FLAG: --reserved-cpus="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764411 4942 flags.go:64] FLAG: --reserved-memory="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764418 4942 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 
19:17:20.764611 4942 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764623 4942 flags.go:64] FLAG: --rotate-certificates="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764628 4942 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764632 4942 flags.go:64] FLAG: --runonce="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764636 4942 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764640 4942 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764646 4942 flags.go:64] FLAG: --seccomp-default="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764651 4942 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764655 4942 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764660 4942 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764667 4942 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764671 4942 flags.go:64] FLAG: --storage-driver-password="root" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764675 4942 flags.go:64] FLAG: --storage-driver-secure="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764680 4942 flags.go:64] FLAG: --storage-driver-table="stats" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764685 4942 flags.go:64] FLAG: --storage-driver-user="root" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764689 4942 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764693 4942 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 18 
19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764699 4942 flags.go:64] FLAG: --system-cgroups="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764703 4942 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764719 4942 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764724 4942 flags.go:64] FLAG: --tls-cert-file="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764728 4942 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764740 4942 flags.go:64] FLAG: --tls-min-version="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764745 4942 flags.go:64] FLAG: --tls-private-key-file="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764749 4942 flags.go:64] FLAG: --topology-manager-policy="none" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764753 4942 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764784 4942 flags.go:64] FLAG: --topology-manager-scope="container" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764803 4942 flags.go:64] FLAG: --v="2" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764812 4942 flags.go:64] FLAG: --version="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764820 4942 flags.go:64] FLAG: --vmodule="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764827 4942 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764831 4942 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765003 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765009 4942 feature_gate.go:330] unrecognized feature 
gate: MachineConfigNodes Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765013 4942 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765017 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765022 4942 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765033 4942 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765039 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765043 4942 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765046 4942 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765050 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765056 4942 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765059 4942 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765063 4942 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765067 4942 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765070 4942 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765073 4942 
feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765077 4942 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765080 4942 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765084 4942 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765088 4942 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765091 4942 feature_gate.go:330] unrecognized feature gate: Example Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765094 4942 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765099 4942 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765144 4942 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765149 4942 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765154 4942 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765158 4942 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765163 4942 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765167 4942 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765172 4942 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765176 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765181 4942 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765185 4942 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765189 4942 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765193 4942 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765197 4942 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765202 4942 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765206 
4942 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765256 4942 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765260 4942 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765264 4942 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765276 4942 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765282 4942 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765285 4942 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765289 4942 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765292 4942 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765296 4942 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765300 4942 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765304 4942 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765309 4942 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765313 4942 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765317 4942 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765321 4942 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765325 4942 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765329 4942 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765333 4942 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765338 4942 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765342 4942 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765346 4942 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765350 4942 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765362 4942 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765366 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765369 4942 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765373 4942 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 19:17:20 crc 
kubenswrapper[4942]: W0218 19:17:20.765376 4942 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765380 4942 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765383 4942 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765387 4942 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765391 4942 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765394 4942 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765402 4942 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.765410 4942 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.782237 4942 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.782303 4942 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782413 4942 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782423 4942 feature_gate.go:330] 
unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782430 4942 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782435 4942 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782441 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782447 4942 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782455 4942 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782465 4942 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782471 4942 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782477 4942 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782483 4942 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782488 4942 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782494 4942 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782500 4942 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782506 4942 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782513 4942 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes 
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782519 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782525 4942 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782533 4942 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782539 4942 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782545 4942 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782550 4942 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782555 4942 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782561 4942 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782566 4942 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782571 4942 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782577 4942 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782582 4942 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782587 4942 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782592 4942 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782598 4942 
feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782603 4942 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782611 4942 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782617 4942 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782624 4942 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782631 4942 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782637 4942 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782644 4942 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782650 4942 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782660 4942 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782668 4942 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782677 4942 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782685 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782692 4942 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782700 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782707 4942 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782714 4942 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782721 4942 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782727 4942 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782736 4942 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782745 4942 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782753 4942 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782787 4942 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782801 4942 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782809 4942 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782815 4942 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782821 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782826 4942 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782832 4942 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782839 4942 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782844 4942 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782849 4942 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782855 4942 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782862 4942 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782869 4942 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782876 4942 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782882 4942 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782887 4942 feature_gate.go:330] 
unrecognized feature gate: SigstoreImageVerification Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782892 4942 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782898 4942 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782903 4942 feature_gate.go:330] unrecognized feature gate: Example Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.782914 4942 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783109 4942 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783118 4942 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783124 4942 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783130 4942 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783135 4942 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783140 4942 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783145 4942 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 
18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783151 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783159 4942 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783169 4942 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783177 4942 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783184 4942 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783192 4942 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783199 4942 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783207 4942 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783212 4942 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783217 4942 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783222 4942 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783228 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783233 4942 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783239 4942 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 19:17:20 crc 
kubenswrapper[4942]: W0218 19:17:20.783244 4942 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783249 4942 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783254 4942 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783260 4942 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783265 4942 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783272 4942 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783278 4942 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783284 4942 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783290 4942 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783295 4942 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783300 4942 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783305 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783311 4942 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783316 4942 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783321 
4942 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783327 4942 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783332 4942 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783337 4942 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783342 4942 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783347 4942 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783353 4942 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783359 4942 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783364 4942 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783369 4942 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783374 4942 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783380 4942 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783385 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783391 4942 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783398 4942 feature_gate.go:330] unrecognized feature gate: Example Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783403 4942 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783408 4942 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783414 4942 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783419 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783424 4942 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783429 4942 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783436 4942 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783443 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783449 4942 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783455 4942 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783461 4942 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783467 4942 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783473 4942 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783478 4942 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783484 4942 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783489 4942 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783494 4942 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783499 4942 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783504 4942 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783509 4942 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783515 4942 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.783524 4942 
feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.784726 4942 server.go:940] "Client rotation is on, will bootstrap in background" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.792948 4942 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.793159 4942 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.795003 4942 server.go:997] "Starting client certificate rotation" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.795061 4942 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.795328 4942 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-22 22:01:25.886076315 +0000 UTC Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.795506 4942 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.825099 4942 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 19:17:20 crc kubenswrapper[4942]: E0218 19:17:20.826985 4942 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.829357 4942 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.846753 4942 log.go:25] "Validated CRI v1 runtime API" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.884490 4942 log.go:25] "Validated CRI v1 image API" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.887174 4942 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.895456 4942 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-18-19-12-30-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.895504 4942 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.924865 4942 manager.go:217] Machine: {Timestamp:2026-02-18 19:17:20.920004752 +0000 UTC m=+0.624937487 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 
CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:15e4da6b-0b96-4412-ada2-f835d7e5f88a BootID:26ba8477-3134-4454-b1a3-81cc0f315017 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:88:f4:b2 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:88:f4:b2 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:3d:2a:4a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ef:3a:9e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:62:2d:57 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:4a:ea:0d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2a:2f:04:c9:87:aa Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:9a:c4:64:70:dc:12 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data 
Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 
Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.925332 4942 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.925637 4942 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.926259 4942 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.926631 4942 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.926686 4942 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.927068 4942 topology_manager.go:138] "Creating topology manager with none policy"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.927089 4942 container_manager_linux.go:303] "Creating device plugin manager"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.927655 4942 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.927713 4942 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.928106 4942 state_mem.go:36] "Initialized new in-memory state store"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.928266 4942 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.933054 4942 kubelet.go:418] "Attempting to sync node with API server"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.933113 4942 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.933172 4942 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.933198 4942 kubelet.go:324] "Adding apiserver pod source"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.933229 4942 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 
19:17:20.937203 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Feb 18 19:17:20 crc kubenswrapper[4942]: E0218 19:17:20.937398 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.937732 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Feb 18 19:17:20 crc kubenswrapper[4942]: E0218 19:17:20.937862 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.939930 4942 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.941102 4942 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.944108 4942 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.945988 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946051 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946074 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946092 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946127 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946146 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946164 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946197 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946218 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946237 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946295 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946314 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.947486 4942 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.948340 4942 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.948628 4942 server.go:1280] "Started kubelet" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.950158 4942 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.950145 4942 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.951940 4942 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 18 19:17:20 crc systemd[1]: Started Kubernetes Kubelet. Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.952074 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.952115 4942 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.952319 4942 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.953152 4942 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.952424 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:13:41.229528457 +0000 UTC Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.952603 4942 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 18 19:17:20 crc kubenswrapper[4942]: E0218 19:17:20.952380 4942 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.954554 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Feb 18 19:17:20 crc kubenswrapper[4942]: E0218 19:17:20.956335 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:17:20 crc kubenswrapper[4942]: E0218 19:17:20.954474 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="200ms" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.960212 4942 server.go:460] "Adding debug handlers to kubelet server" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.964120 4942 factory.go:55] Registering systemd factory Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.964239 4942 factory.go:221] Registration of the systemd container factory successfully Feb 18 19:17:20 crc kubenswrapper[4942]: E0218 19:17:20.966070 4942 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.188:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18956d5527dc0823 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:17:20.948537379 +0000 UTC m=+0.653470114,LastTimestamp:2026-02-18 19:17:20.948537379 +0000 UTC m=+0.653470114,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971482 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971595 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971630 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971659 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971690 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971716 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971743 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971812 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971844 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971872 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971900 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971926 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971952 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971979 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972003 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972050 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972074 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972099 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972157 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972182 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972206 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972234 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972260 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972288 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972312 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972336 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972385 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972413 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972440 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" 
seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972468 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972496 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972522 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972550 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972580 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972610 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 18 19:17:20 crc 
kubenswrapper[4942]: I0218 19:17:20.972636 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972662 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972688 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972720 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972748 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972823 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972853 4942 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972881 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972908 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972933 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972959 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973017 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973044 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973073 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973097 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973123 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973147 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973185 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973216 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973276 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973304 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973338 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973367 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973396 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973425 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973454 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973479 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973508 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973537 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973565 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973591 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 
18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973618 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973646 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973697 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973723 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973754 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973825 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973853 4942 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973879 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973905 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973965 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973998 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973303 4942 factory.go:153] Registering CRI-O factory Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974113 4942 factory.go:221] Registration of the crio container factory successfully Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974035 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974283 4942 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974274 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974339 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974363 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974390 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974413 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" 
seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974433 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974456 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974474 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974491 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974511 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974529 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 
19:17:20.974548 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974567 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974588 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974606 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974623 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974647 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974664 4942 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974680 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974698 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974716 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974734 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974752 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974884 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974905 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974925 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974964 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974362 4942 factory.go:103] Registering Raw factory Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974988 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975162 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975043 4942 
manager.go:1196] Started watching for new ooms in manager Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975218 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975383 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975413 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975443 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975469 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975494 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 18 19:17:20 
crc kubenswrapper[4942]: I0218 19:17:20.975521 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975546 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975572 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975593 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975614 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975638 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975663 4942 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975683 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975742 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975791 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975815 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975840 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975862 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975887 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975908 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975933 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975953 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975973 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975993 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" 
seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976017 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976041 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976061 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976085 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976107 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976129 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976154 4942 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976177 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976201 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976223 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976245 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976283 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976303 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976321 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976341 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976362 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976383 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976403 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976423 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976485 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976506 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976526 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976547 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976566 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976585 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" 
seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976604 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976624 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976643 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976662 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976679 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976702 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 18 19:17:20 crc 
kubenswrapper[4942]: I0218 19:17:20.978315 4942 manager.go:319] Starting recovery of all containers
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979000 4942 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979168 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979212 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979244 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979278 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979307 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979330 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979355 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979378 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979424 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979448 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979475 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979513 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979564 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979595 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979627 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979657 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979684 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979712 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979740 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979814 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979845 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979876 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979903 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979929 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979959 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979988 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980019 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980050 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980077 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980107 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980128 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980148 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980167 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980186 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980206 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980228 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980246 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980263 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980284 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980303 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980322 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980341 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980362 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980383 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980400 4942 reconstruct.go:97] "Volume reconstruction finished"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980472 4942 reconciler.go:26] "Reconciler: start to sync state"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.005214 4942 manager.go:324] Recovery completed
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.021296 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.023308 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.023363 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.023374 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.024380 4942 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.024405 4942 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.024440 4942 state_mem.go:36] "Initialized new in-memory state store"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.031422 4942 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.034078 4942 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.034512 4942 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.034554 4942 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.034691 4942 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 18 19:17:21 crc kubenswrapper[4942]: W0218 19:17:21.036670 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused
Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.036754 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.049056 4942 policy_none.go:49] "None policy: Start"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.050269 4942 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.050329 4942 state_mem.go:35] "Initializing new in-memory state store"
Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.053839 4942 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.131315 4942 manager.go:334] "Starting Device Plugin manager"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.131738 4942 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.131752 4942 server.go:79] "Starting device plugin registration server"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.132301 4942 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.132318 4942 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.132853 4942 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.133095 4942 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.133111 4942 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.135500 4942 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"]
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.135678 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.137246 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.137293 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.137310 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.137562 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.137748 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.137815 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139238 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139295 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139317 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139504 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139538 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139555 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139710 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139844 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139877 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141077 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141173 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141592 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141629 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141647 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141796 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141922 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141959 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.143901 4942 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.144241 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.144267 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.144278 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.145650 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.145682 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.145691 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.145881 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.146210 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.146253 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.146921 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.146950 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.146968 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.147208 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.147272 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.147291 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.147353 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.147387 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.148335 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.148374 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.148387 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.157241 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="400ms"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183505 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183599 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183687 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183710 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183730 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183749 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183806 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183826 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183845 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183860 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183875 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183891 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183906 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183931 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183965 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.233677 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.235052 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.235087 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.235098 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.235122 4942 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.235812 4942 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.284862 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285163 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285256 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285329 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285409 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285487 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285353 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285612 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285500 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285098 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285426 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285566 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285556 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285814 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285850 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285879 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285908 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285972 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285990 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286015 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286051 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286062 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286098 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286107 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286156 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285925 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286218 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286264 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286410 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286410 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.289927 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: W0218 19:17:21.333900 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-beee5200fae558a39fda42c2ce66cd18696637ea9ca22dd80f1dfe753e4826ff WatchSource:0}: Error finding container beee5200fae558a39fda42c2ce66cd18696637ea9ca22dd80f1dfe753e4826ff: Status 404 returned error can't find the container with id beee5200fae558a39fda42c2ce66cd18696637ea9ca22dd80f1dfe753e4826ff Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.437086 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.438693 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.438798 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.438818 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.438865 4942 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.439693 4942 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.496461 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: W0218 19:17:21.519128 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-bd22406eca633f3eda21f4ba8b3c78ef17d4029502dd2e6d2f3becaee32dce29 WatchSource:0}: Error finding container bd22406eca633f3eda21f4ba8b3c78ef17d4029502dd2e6d2f3becaee32dce29: Status 404 returned error can't find the container with id bd22406eca633f3eda21f4ba8b3c78ef17d4029502dd2e6d2f3becaee32dce29 Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.524724 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.551844 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.558838 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="800ms" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.582255 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:17:21 crc kubenswrapper[4942]: W0218 19:17:21.600033 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-86abd5e6c59a8801c977070aaf2c8b8d3b2fe729948843999bc5b18915157a5a WatchSource:0}: Error finding container 86abd5e6c59a8801c977070aaf2c8b8d3b2fe729948843999bc5b18915157a5a: Status 404 returned error can't find the container with id 86abd5e6c59a8801c977070aaf2c8b8d3b2fe729948843999bc5b18915157a5a Feb 18 19:17:21 crc kubenswrapper[4942]: W0218 19:17:21.800813 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.800941 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:17:21 crc kubenswrapper[4942]: W0218 19:17:21.838553 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.838686 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.839889 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.841582 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.841635 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.841653 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.841691 4942 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.842325 4942 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.949850 4942 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.954047 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:09:19.222309154 +0000 UTC Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.040220 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"beee5200fae558a39fda42c2ce66cd18696637ea9ca22dd80f1dfe753e4826ff"} Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.041898 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"86abd5e6c59a8801c977070aaf2c8b8d3b2fe729948843999bc5b18915157a5a"} Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.044124 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3"} Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.044205 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"24c5ff3d077169128b674657bf2669ef0b4d72ad21d4062d7fa7f76aa83eaa2a"} Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.044453 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.046906 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.046966 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.046986 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.049715 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4"} Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.049827 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f6dfa8dbd907625a0d654282e664bb179d300b2c0437dda1c58b1ecb350c77ba"} Feb 18 19:17:22 crc kubenswrapper[4942]: W0218 19:17:22.050734 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Feb 18 19:17:22 crc kubenswrapper[4942]: E0218 19:17:22.050892 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.051749 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bd22406eca633f3eda21f4ba8b3c78ef17d4029502dd2e6d2f3becaee32dce29"} Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.051917 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.052937 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.052995 4942 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.053019 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:22 crc kubenswrapper[4942]: W0218 19:17:22.139919 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Feb 18 19:17:22 crc kubenswrapper[4942]: E0218 19:17:22.140007 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:17:22 crc kubenswrapper[4942]: E0218 19:17:22.359630 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="1.6s" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.642875 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.644321 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.644377 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.644389 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:22 crc 
kubenswrapper[4942]: I0218 19:17:22.644425 4942 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:17:22 crc kubenswrapper[4942]: E0218 19:17:22.645147 4942 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.856290 4942 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 19:17:22 crc kubenswrapper[4942]: E0218 19:17:22.858144 4942 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.949976 4942 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.954201 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:44:07.621929573 +0000 UTC Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.056432 4942 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09" exitCode=0 Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.056906 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09"} Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.057130 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.058013 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.058051 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.058062 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.058196 4942 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="56411e12886c9a228da08cd2a84af4beda72fb5b0a8a51a10d38558853b1d748" exitCode=0 Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.058253 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"56411e12886c9a228da08cd2a84af4beda72fb5b0a8a51a10d38558853b1d748"} Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.058435 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.060124 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.060231 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a"} Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.060387 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.060128 4942 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a" exitCode=0 Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.061280 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.061322 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.061341 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.061342 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.061433 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.061441 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.062158 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.062201 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 
19:17:23.062218 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.063209 4942 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3" exitCode=0 Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.063276 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3"} Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.063689 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.064700 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.064739 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.064756 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.066869 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746"} Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.066960 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.066974 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4"}
Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.067112 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe"}
Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.071660 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.071747 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.071819 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:23 crc kubenswrapper[4942]: W0218 19:17:23.817453 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused
Feb 18 19:17:23 crc kubenswrapper[4942]: E0218 19:17:23.817644 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError"
Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.949611 4942 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused
Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.954755 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:51:25.269150799 +0000 UTC
Feb 18 19:17:23 crc kubenswrapper[4942]: E0218 19:17:23.960377 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="3.2s"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.073617 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913"}
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.073782 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.074619 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.074645 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.074657 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.078773 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5"}
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.078812 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0"}
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.078825 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb"}
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.078945 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.079997 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.080024 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.080036 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.089524 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88"}
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.089575 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d"}
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.089592 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954"}
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.089605 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383"}
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.091400 4942 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d9c537e3da2b2161286e254413b53d277aa3f40704439fabadbc37848f2b2fc7" exitCode=0
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.091546 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.091561 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.091543 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d9c537e3da2b2161286e254413b53d277aa3f40704439fabadbc37848f2b2fc7"}
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.092711 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.092735 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.092744 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.092771 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.092777 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.092784 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.245330 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.246864 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.246909 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.246922 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.246947 4942 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 19:17:24 crc kubenswrapper[4942]: E0218 19:17:24.247486 4942 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.713828 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.720134 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.955017 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:18:53.171483231 +0000 UTC
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.096629 4942 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="05c476815b1a0d0fcac36ccf894fb3b31e2829b84816a3da48e1f6bbcb476065" exitCode=0
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.096781 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"05c476815b1a0d0fcac36ccf894fb3b31e2829b84816a3da48e1f6bbcb476065"}
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.096823 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.097918 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.098023 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.098088 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.100018 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.100041 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5246513a84d5da4c946e19dabd015225e05065daacd217fe981038f9c572b73f"}
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.100118 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.100133 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.101211 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.101233 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.101243 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.101350 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.101384 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.101398 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.102163 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.102197 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.102209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.850235 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.955390 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:13:29.344341464 +0000 UTC
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.107034 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.108010 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4c8a67008b781ea71caada1442830007b0bd3da48a88497babddf482144bfec0"}
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.108103 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"76699f1fec7e64b06e2cc8478d06b157701ccfb88e09c32be80176f7ff7036b6"}
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.108127 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a85175aa81681c668f7d94ca0deceeac84a65b61bcca1eea90227320748655e7"}
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.108143 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9bfe09f5c6e255c5b82f078a14b9a0e6d1e9160a992d135aa89d3b64899315ea"}
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.108255 4942 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.108293 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.109293 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.109336 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.109352 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.109951 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.109982 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.109999 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.670015 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.776962 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.956322 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 22:42:48.466110864 +0000 UTC
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.015700 4942 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.115200 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4104133a81e916b706c0e0f75486e2e71f4f98f4329b84ec1320e500f810fbfc"}
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.115272 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.115342 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.115553 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.116368 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.116395 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.116403 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.116559 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.116611 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.116642 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.117330 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.117361 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.117371 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.448092 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.449920 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.449998 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.450019 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.450064 4942 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.659367 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.659662 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.661557 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.661616 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.661642 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.957298 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:25:43.190270707 +0000 UTC
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.118603 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.119239 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.121195 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.121251 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.121266 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.121953 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.122030 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.122064 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.848332 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.848675 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.850525 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.850613 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.850635 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.958146 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:25:52.618003459 +0000 UTC
Feb 18 19:17:29 crc kubenswrapper[4942]: I0218 19:17:29.179669 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:29 crc kubenswrapper[4942]: I0218 19:17:29.180050 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:29 crc kubenswrapper[4942]: I0218 19:17:29.182002 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:29 crc kubenswrapper[4942]: I0218 19:17:29.182067 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:29 crc kubenswrapper[4942]: I0218 19:17:29.182085 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:29 crc kubenswrapper[4942]: I0218 19:17:29.959060 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:07:30.682345415 +0000 UTC
Feb 18 19:17:30 crc kubenswrapper[4942]: I0218 19:17:30.959591 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:04:43.431687801 +0000 UTC
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.014923 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.015227 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.018263 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.018315 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.018338 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:31 crc kubenswrapper[4942]: E0218 19:17:31.144040 4942 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.154083 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.154297 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.155834 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.155905 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.155929 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.960460 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 04:27:02.853296037 +0000 UTC
Feb 18 19:17:32 crc kubenswrapper[4942]: I0218 19:17:32.960682 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:30:32.187339887 +0000 UTC
Feb 18 19:17:33 crc kubenswrapper[4942]: I0218 19:17:33.961388 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 21:09:33.8663889 +0000 UTC
Feb 18 19:17:34 crc kubenswrapper[4942]: I0218 19:17:34.154266 4942 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 19:17:34 crc kubenswrapper[4942]: I0218 19:17:34.154394 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 19:17:34 crc kubenswrapper[4942]: W0218 19:17:34.789479 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 18 19:17:34 crc kubenswrapper[4942]: I0218 19:17:34.789637 4942 trace.go:236] Trace[1125017472]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:17:24.788) (total time: 10001ms):
Feb 18 19:17:34 crc kubenswrapper[4942]: Trace[1125017472]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:17:34.789)
Feb 18 19:17:34 crc kubenswrapper[4942]: Trace[1125017472]: [10.001521808s] [10.001521808s] END
Feb 18 19:17:34 crc kubenswrapper[4942]: E0218 19:17:34.789680 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 18 19:17:34 crc kubenswrapper[4942]: W0218 19:17:34.864977 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 18 19:17:34 crc kubenswrapper[4942]: I0218 19:17:34.865094 4942 trace.go:236] Trace[260678053]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:17:24.863) (total time: 10001ms):
Feb 18 19:17:34 crc kubenswrapper[4942]: Trace[260678053]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:17:34.864)
Feb 18 19:17:34 crc kubenswrapper[4942]: Trace[260678053]: [10.001283813s] [10.001283813s] END
Feb 18 19:17:34 crc kubenswrapper[4942]: E0218 19:17:34.865127 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 18 19:17:34 crc kubenswrapper[4942]: I0218 19:17:34.950950 4942 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 18 19:17:34 crc kubenswrapper[4942]: I0218 19:17:34.962520 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:37:38.747421841 +0000 UTC
Feb 18 19:17:35 crc kubenswrapper[4942]: W0218 19:17:35.213482 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 18 19:17:35 crc kubenswrapper[4942]: I0218 19:17:35.213632 4942 trace.go:236] Trace[372135767]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:17:25.211) (total time: 10001ms):
Feb 18 19:17:35 crc kubenswrapper[4942]: Trace[372135767]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:17:35.213)
Feb 18 19:17:35 crc kubenswrapper[4942]: Trace[372135767]: [10.001607131s] [10.001607131s] END
Feb 18 19:17:35 crc kubenswrapper[4942]: E0218 19:17:35.213667 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 18 19:17:35 crc kubenswrapper[4942]: I0218 19:17:35.407325 4942 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 18 19:17:35 crc kubenswrapper[4942]: I0218 19:17:35.407417 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 18 19:17:35 crc kubenswrapper[4942]: I0218 19:17:35.430880 4942 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 18 19:17:35 crc kubenswrapper[4942]: I0218 19:17:35.430959 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 18 19:17:35 crc kubenswrapper[4942]: I0218 19:17:35.962669 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:56:00.678431319 +0000 UTC
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.050627 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.050910 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.052051 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.052090 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.052103 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.089263 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.145071 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.146233 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.146326 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.146352 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.181958 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.674102 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.674250 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.675587 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.675649 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.675665 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.783462 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.783728 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.785427 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.785475 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.785492 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.788783 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.963507 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 06:33:39.616257798 +0000 UTC
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.148262 4942 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.148339 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.148339 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.149992 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.150067 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.150091 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.150435 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.150499 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.150518 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.964510 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:26:05.66862836 +0000 UTC
Feb 18 19:17:38 crc kubenswrapper[4942]: I0218 19:17:38.965045 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation
deadline is 2025-11-12 18:23:20.409513321 +0000 UTC Feb 18 19:17:39 crc kubenswrapper[4942]: I0218 19:17:39.965730 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:18:14.771144861 +0000 UTC Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.161158 4942 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 19:17:40 crc kubenswrapper[4942]: E0218 19:17:40.399245 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.402130 4942 trace.go:236] Trace[188569331]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:17:29.443) (total time: 10958ms): Feb 18 19:17:40 crc kubenswrapper[4942]: Trace[188569331]: ---"Objects listed" error: 10958ms (19:17:40.401) Feb 18 19:17:40 crc kubenswrapper[4942]: Trace[188569331]: [10.958232127s] [10.958232127s] END Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.402172 4942 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.404462 4942 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 18 19:17:40 crc kubenswrapper[4942]: E0218 19:17:40.404560 4942 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.422742 4942 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 
19:17:40.440032 4942 csr.go:261] certificate signing request csr-ccn9p is approved, waiting to be issued Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.455326 4942 csr.go:257] certificate signing request csr-ccn9p is issued Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.472800 4942 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.472874 4942 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.472884 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.472959 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.477955 4942 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33612->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.478037 4942 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33612->192.168.126.11:17697: read: connection reset by peer" Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.755876 4942 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.796112 4942 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 18 19:17:40 crc kubenswrapper[4942]: W0218 19:17:40.796808 4942 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 18 19:17:40 crc kubenswrapper[4942]: E0218 19:17:40.796676 4942 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": read tcp 38.102.83.188:54284->38.102.83.188:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-apiserver-crc.18956d55691e14e1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:17:22.043385057 +0000 UTC m=+1.748317732,LastTimestamp:2026-02-18 19:17:22.043385057 +0000 UTC m=+1.748317732,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:17:40 crc kubenswrapper[4942]: W0218 19:17:40.796823 4942 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 18 19:17:40 crc kubenswrapper[4942]: W0218 19:17:40.796823 4942 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.966181 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 19:38:30.097128882 +0000 UTC Feb 18 19:17:41 crc kubenswrapper[4942]: E0218 19:17:41.145123 4942 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.160635 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.162524 4942 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5246513a84d5da4c946e19dabd015225e05065daacd217fe981038f9c572b73f" exitCode=255 Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.162565 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5246513a84d5da4c946e19dabd015225e05065daacd217fe981038f9c572b73f"} Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 
19:17:41.162726 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.163651 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.163680 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.163688 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.164477 4942 scope.go:117] "RemoveContainer" containerID="5246513a84d5da4c946e19dabd015225e05065daacd217fe981038f9c572b73f" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.254402 4942 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.261860 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.267263 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.457917 4942 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-18 19:12:40 +0000 UTC, rotation deadline is 2026-12-26 19:06:59.826001153 +0000 UTC Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.457970 4942 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7463h49m18.368034067s for next certificate rotation Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.945977 4942 apiserver.go:52] "Watching apiserver" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 
19:17:41.952627 4942 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.953064 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-5pgvt","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.953529 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.953558 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.953617 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.953695 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:41 crc kubenswrapper[4942]: E0218 19:17:41.953903 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:17:41 crc kubenswrapper[4942]: E0218 19:17:41.953993 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.954246 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.954289 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5pgvt" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.954303 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:41 crc kubenswrapper[4942]: E0218 19:17:41.954359 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.956830 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.956946 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.957050 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.957085 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.957242 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.957554 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.957723 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.958293 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.958360 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.958591 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 19:17:41 crc 
kubenswrapper[4942]: I0218 19:17:41.958607 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.960730 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.966307 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 09:42:00.198047894 +0000 UTC Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.989028 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.001369 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.016231 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.016364 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.016704 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.016851 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.016945 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017060 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017147 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017222 4942 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017235 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017304 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017332 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017363 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 
19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017387 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017403 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.018037 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.018245 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.018787 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.026021 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.027832 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.032007 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.032051 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.032066 4942 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.032181 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:42.532133907 +0000 UTC m=+22.237066782 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.034955 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.035726 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.035749 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.035795 4942 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.035924 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:17:42.5359066 +0000 UTC m=+22.240839265 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.037294 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.037569 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.040011 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.043242 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: 
\"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.048642 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.054793 4942 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.059261 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.069661 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.077878 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118108 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118163 4942 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118189 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118212 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118239 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118261 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118285 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118308 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118333 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118356 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118378 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118401 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118423 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118448 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118469 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118533 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118556 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118579 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118600 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118624 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118646 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118670 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118694 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118716 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118738 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118782 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118809 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118831 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118853 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118877 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118920 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118939 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118964 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118984 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119005 
4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119028 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119048 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119068 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119090 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119110 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119131 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119150 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119181 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119201 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119221 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119244 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119265 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119285 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119306 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119345 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119365 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119385 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119405 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119441 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119460 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119481 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119503 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119490 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119526 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119550 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119572 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119592 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119614 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119637 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119658 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119680 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119702 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119722 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119729 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119747 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119793 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119815 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119836 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119858 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119883 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119904 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119927 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119947 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 19:17:42 crc 
kubenswrapper[4942]: I0218 19:17:42.119948 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119968 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119992 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120015 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120037 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120173 4942 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120198 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120220 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120241 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120263 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120273 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120321 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120342 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120363 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120384 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120404 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120424 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120445 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120466 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120487 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120507 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120528 4942 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120549 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120570 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120590 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120613 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120634 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120656 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120676 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120697 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120720 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120740 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120785 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120809 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120831 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120852 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120875 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120897 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120963 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120985 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121006 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121030 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121052 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121074 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121095 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121117 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121139 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121160 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121181 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") 
pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121202 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121224 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121246 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121268 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121293 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121317 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121339 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121360 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121383 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121407 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121433 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121455 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121476 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121498 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121520 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121546 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121567 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121588 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121611 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121632 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121659 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121681 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121703 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121726 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121748 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121785 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121810 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121833 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121855 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121879 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121901 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121922 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121945 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " 
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121969 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121990 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122014 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122036 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122059 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122081 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122103 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122125 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122149 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122173 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122195 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 19:17:42 crc kubenswrapper[4942]: 
I0218 19:17:42.122218 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122242 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122265 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122288 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122312 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122336 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122358 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122381 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122385 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122404 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122430 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122454 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122478 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122501 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122525 4942 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122548 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122554 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122579 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122602 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122626 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122649 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122672 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122697 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122706 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122723 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122747 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122787 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122810 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122837 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122859 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122883 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122954 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122983 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123028 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123144 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f163820b-df8b-4e07-9b74-d5f3332580a6-hosts-file\") pod \"node-resolver-5pgvt\" (UID: \"f163820b-df8b-4e07-9b74-d5f3332580a6\") " pod="openshift-dns/node-resolver-5pgvt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123167 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjg6z\" (UniqueName: \"kubernetes.io/projected/f163820b-df8b-4e07-9b74-d5f3332580a6-kube-api-access-pjg6z\") pod \"node-resolver-5pgvt\" (UID: \"f163820b-df8b-4e07-9b74-d5f3332580a6\") " pod="openshift-dns/node-resolver-5pgvt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123220 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123284 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc 
kubenswrapper[4942]: I0218 19:17:42.123301 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123335 4942 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123350 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123365 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123379 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123436 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123490 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" 
(UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123713 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123779 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.124449 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.124575 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.124658 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.124685 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.124873 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.125042 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.125101 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.125119 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.125284 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.125508 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.128566 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.128421 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.125600 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.125602 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.128876 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.129011 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.130134 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.130203 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.130393 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.130435 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.131224 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.131401 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.131397 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.132037 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.132305 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.132493 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.132879 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.125623 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.134164 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.134713 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.134751 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.135229 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.135294 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.136584 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.136599 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.137141 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.138426 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.139099 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.139156 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.139424 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.139439 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.139497 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.139746 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.140008 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.140045 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.133687 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.140278 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.140303 4942 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.140445 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:42.640399041 +0000 UTC m=+22.345331696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.140492 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.133811 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.141454 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.141627 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.141826 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.141962 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.142039 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.142294 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.142428 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.154549 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.154885 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.155552 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.155421 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.155871 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.156087 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:17:42.656054108 +0000 UTC m=+22.360986773 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.156132 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.156545 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.156968 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.157119 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.157360 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.157407 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.157632 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.157505 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.157698 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.158026 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.158113 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.158450 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.158683 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.158721 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.160539 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.160865 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.161155 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.161198 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.161210 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.161306 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.161630 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.161797 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.161951 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.161992 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.162398 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.162541 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.162803 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.163131 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.163346 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.163435 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.163633 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.164017 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.164412 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.164951 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.165025 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.165502 4942 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.165607 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.165644 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:42.665613684 +0000 UTC m=+22.370546369 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.165694 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.166052 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.166356 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.166605 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.166694 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.166865 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.167123 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.167140 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.167203 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.167237 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.135283 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.167504 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.167511 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.168745 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.168752 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.168865 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.169040 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.169250 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.169290 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.168977 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.136359 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.169677 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.169891 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.169942 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.170133 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.170507 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.170677 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.170716 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.170982 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.171086 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.171462 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.172213 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.171754 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.172266 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.173517 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.173832 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.174064 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.176218 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.176623 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.176820 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.176900 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.177299 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.178422 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-8jfwb"] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.178826 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-2rbc4"] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.179381 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wqxh4"] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.179750 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.180623 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.180703 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.184545 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.184610 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.184902 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.190337 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.190420 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.190527 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.190712 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.190878 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.191033 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.191109 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.191207 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.191229 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.191342 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.191458 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.191465 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.191997 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.192604 4942 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.192738 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.192796 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.193267 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.193320 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.195903 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.199061 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.199534 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.199671 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.199741 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.199833 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.200066 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.200199 4942 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8" exitCode=255 Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.200430 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8"} Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.200511 4942 scope.go:117] "RemoveContainer" containerID="5246513a84d5da4c946e19dabd015225e05065daacd217fe981038f9c572b73f" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.200665 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.200773 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.200983 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.201371 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.201633 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.201726 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.202247 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.202784 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.203138 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.203398 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.203571 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.203668 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.207731 4942 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.208415 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.209521 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.209661 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.210405 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.211453 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.211739 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.212450 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.213556 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.213699 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.216081 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.218134 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.219123 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.219433 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.221088 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.221325 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.222639 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.222711 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.223864 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.224275 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f163820b-df8b-4e07-9b74-d5f3332580a6-hosts-file\") pod \"node-resolver-5pgvt\" (UID: \"f163820b-df8b-4e07-9b74-d5f3332580a6\") " pod="openshift-dns/node-resolver-5pgvt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.224307 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjg6z\" (UniqueName: \"kubernetes.io/projected/f163820b-df8b-4e07-9b74-d5f3332580a6-kube-api-access-pjg6z\") pod \"node-resolver-5pgvt\" (UID: \"f163820b-df8b-4e07-9b74-d5f3332580a6\") " pod="openshift-dns/node-resolver-5pgvt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.224352 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.224367 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.224378 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.224549 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.224839 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f163820b-df8b-4e07-9b74-d5f3332580a6-hosts-file\") pod \"node-resolver-5pgvt\" (UID: \"f163820b-df8b-4e07-9b74-d5f3332580a6\") " pod="openshift-dns/node-resolver-5pgvt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225076 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225107 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225121 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225131 4942 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225146 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225157 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 
19:17:42.225170 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225183 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225193 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225203 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225214 4942 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225224 4942 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225233 4942 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225243 4942 reconciler_common.go:293] "Volume detached 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225255 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225265 4942 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225276 4942 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225290 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225302 4942 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225313 4942 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225323 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 18 
19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225333 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225345 4942 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225355 4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225367 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225380 4942 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225394 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225405 4942 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225415 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225429 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225440 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225450 4942 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225461 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225472 4942 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225482 4942 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225494 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc 
kubenswrapper[4942]: I0218 19:17:42.225503 4942 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225514 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225525 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225542 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225554 4942 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225565 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225579 4942 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 
19:17:42.225592 4942 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225603 4942 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225618 4942 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225629 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225640 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225652 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225719 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 
19:17:42.225731 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225742 4942 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225753 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225780 4942 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225791 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225802 4942 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225812 4942 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225823 4942 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225833 4942 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225844 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225856 4942 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225867 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225877 4942 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225887 4942 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225897 4942 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225909 4942 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225920 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225933 4942 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225944 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225957 4942 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225968 4942 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225978 4942 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on 
node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225988 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225998 4942 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226009 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226019 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226030 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226040 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226050 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226061 4942 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226071 4942 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226082 4942 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226092 4942 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226103 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226113 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226124 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226134 4942 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226144 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226154 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226163 4942 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226173 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226183 4942 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226193 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226203 4942 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226213 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226223 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226233 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226244 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226255 4942 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226265 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226276 4942 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226287 4942 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226298 4942 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226309 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226320 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226335 4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226347 4942 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226358 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226372 4942 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226351 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226385 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226464 4942 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226486 4942 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226500 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226538 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226551 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226562 4942 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226573 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226584 4942 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226595 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226606 4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226617 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226628 4942 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226642 4942 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226653 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226665 4942 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226676 4942 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226689 4942 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226702 4942 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226714 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226728 4942 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226742 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226809 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226824 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226837 4942 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226850 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226861 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226875 4942 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226887 4942 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226900 4942 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226913 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226925 4942 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226939 4942 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226952 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node 
\"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226966 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226980 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226997 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227013 4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227028 4942 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227041 4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227055 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227068 
4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227079 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227092 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227106 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227655 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227672 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227726 4942 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227739 4942 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227752 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227780 4942 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227793 4942 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227805 4942 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227818 4942 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227830 4942 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227842 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227855 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227872 4942 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227883 4942 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227895 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227907 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227919 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227930 4942 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 
crc kubenswrapper[4942]: I0218 19:17:42.227943 4942 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227954 4942 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227966 4942 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227978 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227990 4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.228007 4942 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.228021 4942 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.241545 4942 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.241701 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.245686 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjg6z\" (UniqueName: \"kubernetes.io/projected/f163820b-df8b-4e07-9b74-d5f3332580a6-kube-api-access-pjg6z\") pod \"node-resolver-5pgvt\" (UID: \"f163820b-df8b-4e07-9b74-d5f3332580a6\") " pod="openshift-dns/node-resolver-5pgvt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.249365 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.251513 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.251827 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.262224 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491c
d10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.269641 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.272694 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.273126 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.277336 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.284069 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5pgvt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.293923 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.294315 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.306043 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.306474 4942 scope.go:117] "RemoveContainer" containerID="b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.307055 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.308516 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.321718 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329688 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-system-cni-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329727 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-daemon-config\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329747 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cnibin\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " 
pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329781 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27v7m\" (UniqueName: \"kubernetes.io/projected/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-kube-api-access-27v7m\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329801 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-hostroot\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329815 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28921539-823a-4439-a230-3b5aed7085cc-proxy-tls\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329832 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329854 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-tuning-conf-dir\") 
pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329974 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28921539-823a-4439-a230-3b5aed7085cc-mcd-auth-proxy-config\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330017 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-netns\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330037 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-system-cni-dir\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330054 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-cni-multus\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330073 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" 
(UniqueName: \"kubernetes.io/configmap/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330092 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-multus-certs\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330107 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65c5q\" (UniqueName: \"kubernetes.io/projected/75150b8c-7a02-497b-86c3-eabc9c8dbc55-kube-api-access-65c5q\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330122 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-socket-dir-parent\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330142 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-os-release\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330157 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-k8s-cni-cncf-io\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330173 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-cni-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330187 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-cnibin\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330202 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-cni-bin\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330233 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-conf-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330251 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2zj5\" (UniqueName: 
\"kubernetes.io/projected/28921539-823a-4439-a230-3b5aed7085cc-kube-api-access-c2zj5\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330275 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-kubelet\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330289 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-etc-kubernetes\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330304 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/28921539-823a-4439-a230-3b5aed7085cc-rootfs\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330321 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/75150b8c-7a02-497b-86c3-eabc9c8dbc55-cni-binary-copy\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330357 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-os-release\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330381 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330392 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330402 4942 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330412 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330422 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.337002 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.351181 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.363854 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.378573 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.390150 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.399069 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.408371 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.419807 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5246513a84d5da4c946e19dabd015225e05065daacd217fe981038f9c572b73f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:40Z\\\",\\\"message\\\":\\\"-1433084409/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771442244\\\\\\\\\\\\\\\" (2026-02-18 19:17:23 +0000 UTC to 2026-03-20 19:17:24 +0000 UTC (now=2026-02-18 19:17:40.45438601 +0000 UTC))\\\\\\\"\\\\nI0218 19:17:40.454440 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 19:17:40.454315 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 19:17:40.454727 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 19:17:40.454262 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1433084409/tls.crt::/tmp/serving-cert-1433084409/tls.key\\\\\\\"\\\\nI0218 19:17:40.454787 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771442254\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771442254\\\\\\\\\\\\\\\" (2026-02-18 18:17:34 +0000 UTC to 2027-02-18 18:17:34 +0000 UTC (now=2026-02-18 19:17:40.454709698 +0000 UTC))\\\\\\\"\\\\nI0218 19:17:40.454828 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 19:17:40.454834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 19:17:40.454852 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 19:17:40.454856 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 19:17:40.454883 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 19:17:40.455174 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0218 19:17:40.456995 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 
19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.429479 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.430788 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2zj5\" (UniqueName: \"kubernetes.io/projected/28921539-823a-4439-a230-3b5aed7085cc-kube-api-access-c2zj5\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.430842 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-conf-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.430862 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-kubelet\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.430887 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-etc-kubernetes\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.430907 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/28921539-823a-4439-a230-3b5aed7085cc-rootfs\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.430944 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/75150b8c-7a02-497b-86c3-eabc9c8dbc55-cni-binary-copy\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.430979 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-os-release\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: 
\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431015 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-system-cni-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431033 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-daemon-config\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431048 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cnibin\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431101 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-conf-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431273 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-etc-kubernetes\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 
19:17:42.431333 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-kubelet\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431375 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/28921539-823a-4439-a230-3b5aed7085cc-rootfs\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431549 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cnibin\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431622 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-system-cni-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431683 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-os-release\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.432224 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/75150b8c-7a02-497b-86c3-eabc9c8dbc55-cni-binary-copy\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.432585 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-daemon-config\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.432843 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27v7m\" (UniqueName: \"kubernetes.io/projected/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-kube-api-access-27v7m\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.433895 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-hostroot\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.433925 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28921539-823a-4439-a230-3b5aed7085cc-proxy-tls\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.433947 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/28921539-823a-4439-a230-3b5aed7085cc-mcd-auth-proxy-config\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.434000 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-hostroot\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.435264 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28921539-823a-4439-a230-3b5aed7085cc-mcd-auth-proxy-config\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.436699 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.436803 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.436847 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-netns\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.436870 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-system-cni-dir\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.436935 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-cni-multus\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.436958 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437120 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-multus-certs\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437159 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65c5q\" (UniqueName: 
\"kubernetes.io/projected/75150b8c-7a02-497b-86c3-eabc9c8dbc55-kube-api-access-65c5q\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437186 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-socket-dir-parent\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437245 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-k8s-cni-cncf-io\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437275 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-os-release\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437301 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-cnibin\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437330 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-cni-bin\") pod \"multus-8jfwb\" (UID: 
\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437361 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-cni-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437492 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437509 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-cni-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437585 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-socket-dir-parent\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437626 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-k8s-cni-cncf-io\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: 
I0218 19:17:42.437679 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-os-release\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437724 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-cnibin\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437794 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-netns\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437812 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-cni-bin\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437853 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-cni-multus\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437895 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-system-cni-dir\") pod 
\"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437980 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-multus-certs\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.438547 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.439774 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.439983 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.448438 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28921539-823a-4439-a230-3b5aed7085cc-proxy-tls\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.451315 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27v7m\" (UniqueName: \"kubernetes.io/projected/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-kube-api-access-27v7m\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " 
pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.452873 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2zj5\" (UniqueName: \"kubernetes.io/projected/28921539-823a-4439-a230-3b5aed7085cc-kube-api-access-c2zj5\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.455416 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65c5q\" (UniqueName: \"kubernetes.io/projected/75150b8c-7a02-497b-86c3-eabc9c8dbc55-kube-api-access-65c5q\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.458816 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.518646 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.525558 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.538258 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.538296 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.538434 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.538450 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.538462 4942 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.538519 4942 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:43.538499183 +0000 UTC m=+23.243431848 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.538545 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.538599 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.538640 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.538658 4942 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.538749 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:17:43.538722649 +0000 UTC m=+23.243655344 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.546617 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-89fzv"] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.549549 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.551152 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.551638 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.551854 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.551949 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.552633 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.553051 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 19:17:42 crc 
kubenswrapper[4942]: I0218 19:17:42.553047 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.561318 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.576073 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ff
b9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.590609 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.600031 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.611868 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.624857 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.637601 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639697 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-etc-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639785 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-log-socket\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639813 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-systemd-units\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639828 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-config\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639885 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639906 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-script-lib\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc 
kubenswrapper[4942]: I0218 19:17:42.639928 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-netns\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639945 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639969 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-bin\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639988 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-node-log\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640007 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-var-lib-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 
19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640023 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-netd\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640045 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-slash\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640060 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-ovn-kubernetes\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640078 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl7tj\" (UniqueName: \"kubernetes.io/projected/45dc4164-81a9-44cf-b86a-dff571bc0417-kube-api-access-cl7tj\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640099 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-kubelet\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640114 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/45dc4164-81a9-44cf-b86a-dff571bc0417-ovn-node-metrics-cert\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640133 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-env-overrides\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640152 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-systemd\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640175 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-ovn\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.649286 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.660228 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.670854 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.687032 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.697991 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.711597 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5246513a84d5da4c946e19dabd015225e05065daacd217fe981038f9c572b73f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:40Z\\\",\\\"message\\\":\\\"-1433084409/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771442244\\\\\\\\\\\\\\\" (2026-02-18 19:17:23 +0000 UTC to 2026-03-20 19:17:24 +0000 UTC (now=2026-02-18 19:17:40.45438601 +0000 UTC))\\\\\\\"\\\\nI0218 19:17:40.454440 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 19:17:40.454315 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 19:17:40.454727 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 19:17:40.454262 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1433084409/tls.crt::/tmp/serving-cert-1433084409/tls.key\\\\\\\"\\\\nI0218 19:17:40.454787 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771442254\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771442254\\\\\\\\\\\\\\\" (2026-02-18 18:17:34 +0000 UTC to 2027-02-18 18:17:34 +0000 UTC (now=2026-02-18 19:17:40.454709698 +0000 UTC))\\\\\\\"\\\\nI0218 19:17:40.454828 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 19:17:40.454834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 19:17:40.454852 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 19:17:40.454856 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 19:17:40.454883 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 19:17:40.455174 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0218 19:17:40.456995 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 
19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741112 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741199 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-node-log\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741222 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-var-lib-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741241 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-netd\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741259 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-slash\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.741290 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:17:43.741257711 +0000 UTC m=+23.446190376 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741307 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-slash\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741335 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-ovn-kubernetes\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741358 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-netd\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741373 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl7tj\" (UniqueName: \"kubernetes.io/projected/45dc4164-81a9-44cf-b86a-dff571bc0417-kube-api-access-cl7tj\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 
19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741362 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-var-lib-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741393 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-ovn-kubernetes\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741406 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-kubelet\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741430 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-node-log\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741431 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/45dc4164-81a9-44cf-b86a-dff571bc0417-ovn-node-metrics-cert\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741467 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-env-overrides\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741489 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-systemd\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741505 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-ovn\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741543 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741573 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 
19:17:42.741593 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-etc-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741647 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-log-socket\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741668 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-systemd-units\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741683 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-config\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741746 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741780 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-script-lib\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741799 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-netns\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.741804 4942 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741817 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741841 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-bin\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.741879 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:17:43.741863046 +0000 UTC m=+23.446795711 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741908 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-bin\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.741954 4942 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.741986 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:43.741978699 +0000 UTC m=+23.446911364 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.742007 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-etc-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.742057 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-log-socket\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.742077 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-systemd-units\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.742059 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-kubelet\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.743661 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-ovn\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.743682 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-env-overrides\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.743718 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-systemd\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.743867 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.743884 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-netns\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.744015 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-var-lib-cni-networks-ovn-kubernetes\") 
pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.744206 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-script-lib\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.745775 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/45dc4164-81a9-44cf-b86a-dff571bc0417-ovn-node-metrics-cert\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.746591 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-config\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.767298 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl7tj\" (UniqueName: \"kubernetes.io/projected/45dc4164-81a9-44cf-b86a-dff571bc0417-kube-api-access-cl7tj\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.870744 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.966737 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:11:17.901458771 +0000 UTC Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.035372 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.035545 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.040253 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.041241 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.042716 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.043788 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.044442 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.045037 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.045634 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.046225 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.046925 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.047448 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.047986 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.048664 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.049206 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.049745 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.050265 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.050815 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.051414 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.051910 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.052490 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.053101 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.054623 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.055558 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.056098 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.056857 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.057354 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.058002 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.058654 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.059219 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.059871 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.060350 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.060863 4942 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.060971 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.062403 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.063024 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.063541 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.068684 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.069489 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.070451 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.071214 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.072296 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.072757 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.073973 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.074637 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.075589 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.076082 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.077001 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.077541 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.078641 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.079204 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.080395 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.080875 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.081429 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.082625 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.083127 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.206671 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.206730 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.206744 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cc43bdfa8f87b18c190d672f65ec19dd854a057cb070b3b7e69d0c61de7de1b1"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.208049 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc" exitCode=0 Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.208110 4942 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.208157 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"9d4b5c04c361e209886b1bb004385933e7d66c1477df3ba1ff39b92720286780"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.215639 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jfwb" event={"ID":"75150b8c-7a02-497b-86c3-eabc9c8dbc55","Type":"ContainerStarted","Data":"f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.215693 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jfwb" event={"ID":"75150b8c-7a02-497b-86c3-eabc9c8dbc55","Type":"ContainerStarted","Data":"795f7eedc1033efe306a5370120d08da83424ccdc74730cd7ad43f9f0455be94"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.219479 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.219525 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.219537 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"423e7dd637f41bd59e9f4610d40651483c31e98ed1a93cc5a3b51823c029a0da"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.221216 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.221255 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9f98ead60d7d7388ed8e2f826325cdf4fb3f733d0c86b21634a5a15f4660b1dc"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.222609 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5pgvt" event={"ID":"f163820b-df8b-4e07-9b74-d5f3332580a6","Type":"ContainerStarted","Data":"97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.222665 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5pgvt" event={"ID":"f163820b-df8b-4e07-9b74-d5f3332580a6","Type":"ContainerStarted","Data":"2b2efaa19b8957c73861f12e23848fb6ad4f5187a5b63fc0525873d9908beb87"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.224267 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.226392 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.226701 4942 scope.go:117] "RemoveContainer" containerID="b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8" Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.226956 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.227731 4942 generic.go:334] "Generic (PLEG): container finished" podID="1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d" 
containerID="26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7" exitCode=0 Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.227802 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" event={"ID":"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d","Type":"ContainerDied","Data":"26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.227822 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" event={"ID":"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d","Type":"ContainerStarted","Data":"ebb430bd7e3fcbe29a36455e3bd0b6b975dcd2edfe5d779405ff6d6129a46903"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.229836 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ddb883da8855a447ab89d150d48183c16c8676db0c8a228fdca5f0546356c698"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.240507 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.254499 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.270822 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.282992 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.308884 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.325380 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.341231 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5246513a84d5da4c946e19dabd015225e05065daacd217fe981038f9c572b73f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:40Z\\\",\\\"message\\\":\\\"-1433084409/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771442244\\\\\\\\\\\\\\\" (2026-02-18 19:17:23 +0000 UTC to 2026-03-20 19:17:24 +0000 UTC (now=2026-02-18 19:17:40.45438601 +0000 UTC))\\\\\\\"\\\\nI0218 19:17:40.454440 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 19:17:40.454315 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 19:17:40.454727 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 19:17:40.454262 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1433084409/tls.crt::/tmp/serving-cert-1433084409/tls.key\\\\\\\"\\\\nI0218 19:17:40.454787 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771442254\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771442254\\\\\\\\\\\\\\\" (2026-02-18 18:17:34 +0000 UTC to 2027-02-18 18:17:34 +0000 UTC (now=2026-02-18 19:17:40.454709698 +0000 UTC))\\\\\\\"\\\\nI0218 19:17:40.454828 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 19:17:40.454834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 19:17:40.454852 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 19:17:40.454856 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 19:17:40.454883 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 19:17:40.455174 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0218 19:17:40.456995 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set 
denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.359372 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.372705 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.389226 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.400360 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.426305 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.439027 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.452627 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773
257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.466980 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.481904 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.501710 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.515139 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.534568 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.550332 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.550371 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.550506 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.550525 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.550541 4942 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.550584 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:45.55056812 +0000 UTC m=+25.255500775 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.550684 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.550739 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.550808 4942 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.550899 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:45.550877928 +0000 UTC m=+25.255810593 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.559790 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.576701 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.598130 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.609860 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.627586 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.642229 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.752143 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.752387 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:17:45.752348854 +0000 UTC m=+25.457281519 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.752798 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.752831 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.752940 4942 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.753011 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:45.75299498 +0000 UTC m=+25.457927655 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.753021 4942 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.753100 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:45.753091092 +0000 UTC m=+25.458023757 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.967634 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:12:22.137867181 +0000 UTC Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.035219 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.035291 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:44 crc kubenswrapper[4942]: E0218 19:17:44.035398 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:17:44 crc kubenswrapper[4942]: E0218 19:17:44.035462 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.238282 4942 generic.go:334] "Generic (PLEG): container finished" podID="1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d" containerID="3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3" exitCode=0 Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.238385 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" event={"ID":"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d","Type":"ContainerDied","Data":"3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3"} Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.244542 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"} Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.244585 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"} Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.244597 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"} Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.244607 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"} Feb 18 19:17:44 
crc kubenswrapper[4942]: I0218 19:17:44.244616 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"} Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.284674 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.314851 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.342828 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.367838 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.382984 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.455078 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.477960 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.495702 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.512963 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.536048 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.549016 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.563586 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.579413 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.968748 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:25:30.469713948 +0000 UTC Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.035437 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.035605 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.214265 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-wxck8"] Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.214721 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.216963 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.217224 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.217400 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.217572 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.231359 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.250335 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" 
event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"} Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.252170 4942 generic.go:334] "Generic (PLEG): container finished" podID="1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d" containerID="83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80" exitCode=0 Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.252209 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" event={"ID":"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d","Type":"ContainerDied","Data":"83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80"} Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.259220 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.272718 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.287134 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.310971 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.330892 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.348825 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ad
b31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.363892 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.369133 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vscpp\" (UniqueName: \"kubernetes.io/projected/69ef2748-687e-4223-998e-7bd92ad8aaaf-kube-api-access-vscpp\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.369165 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/69ef2748-687e-4223-998e-7bd92ad8aaaf-serviceca\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.369204 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69ef2748-687e-4223-998e-7bd92ad8aaaf-host\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.375257 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.389002 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.403272 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.420479 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.434904 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.450032 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.464594 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.470066 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vscpp\" (UniqueName: \"kubernetes.io/projected/69ef2748-687e-4223-998e-7bd92ad8aaaf-kube-api-access-vscpp\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 
crc kubenswrapper[4942]: I0218 19:17:45.470108 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/69ef2748-687e-4223-998e-7bd92ad8aaaf-serviceca\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.470129 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69ef2748-687e-4223-998e-7bd92ad8aaaf-host\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.470217 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69ef2748-687e-4223-998e-7bd92ad8aaaf-host\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.472269 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/69ef2748-687e-4223-998e-7bd92ad8aaaf-serviceca\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.487504 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.497131 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vscpp\" (UniqueName: \"kubernetes.io/projected/69ef2748-687e-4223-998e-7bd92ad8aaaf-kube-api-access-vscpp\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.505382 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.516743 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.534195 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 
19:17:45.539324 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.558336 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.570878 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.571200 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.571244 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.571386 4942 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.571413 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.571426 4942 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.571477 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:49.571460042 +0000 UTC m=+29.276392707 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.571393 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.571500 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.571512 4942 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.571560 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:49.571541944 +0000 UTC m=+29.276474609 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.584196 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.596972 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.615207 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.628083 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.643277 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.669348 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.688438 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.774360 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.774549 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.774587 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.774633 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:17:49.774590549 +0000 UTC m=+29.479523214 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.774750 4942 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.774862 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:49.774840735 +0000 UTC m=+29.479773590 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.774777 4942 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.774964 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:17:49.774951388 +0000 UTC m=+29.479884263 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.851269 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.852469 4942 scope.go:117] "RemoveContainer" containerID="b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8" Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.852636 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.970525 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 06:46:27.534913586 +0000 UTC Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.035569 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.035603 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.035840 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.035987 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.259644 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wxck8" event={"ID":"69ef2748-687e-4223-998e-7bd92ad8aaaf","Type":"ContainerStarted","Data":"2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727"} Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.259732 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wxck8" event={"ID":"69ef2748-687e-4223-998e-7bd92ad8aaaf","Type":"ContainerStarted","Data":"530f4ea3ed961092e14f800152879b7dd96034db958da0cc81eb74d156e31a47"} Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.264227 4942 generic.go:334] "Generic (PLEG): container finished" podID="1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d" containerID="b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da" exitCode=0 Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.264332 4942 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" event={"ID":"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d","Type":"ContainerDied","Data":"b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da"} Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.266837 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5"} Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.281496 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.305491 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.318456 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.342576 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 
19:17:46.360787 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.380562 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.393029 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.394480 4942 scope.go:117] "RemoveContainer" containerID="b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8" Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.394749 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.398594 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.417181 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.445225 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.466451 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.487114 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.500396 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.520189 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.536179 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.557919 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.573832 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.593219 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.609468 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.623677 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.637632 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.654647 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.671796 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.690542 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.715029 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.732894 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.759999 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ad
b31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.781429 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.794633 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.805462 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.807956 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.808018 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.808033 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.808231 4942 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.817636 4942 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.817986 4942 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.819295 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.819344 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:46 crc 
kubenswrapper[4942]: I0218 19:17:46.819360 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.819382 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.819398 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:46Z","lastTransitionTime":"2026-02-18T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.844996 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.853103 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.853188 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.853206 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.853232 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.853248 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:46Z","lastTransitionTime":"2026-02-18T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.871327 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.876787 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.876963 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.877297 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.877557 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.877684 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:46Z","lastTransitionTime":"2026-02-18T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.895498 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.900802 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.900998 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.901108 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.901261 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.901560 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:46Z","lastTransitionTime":"2026-02-18T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.924373 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.929579 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.929625 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.929638 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.929659 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.929672 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:46Z","lastTransitionTime":"2026-02-18T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.943707 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.943892 4942 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.945879 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.945914 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.945927 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.945945 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.945959 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:46Z","lastTransitionTime":"2026-02-18T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.971287 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:19:27.736138032 +0000 UTC Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.035004 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:47 crc kubenswrapper[4942]: E0218 19:17:47.035217 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.048312 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.048348 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.048361 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.048378 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.048389 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.151068 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.151412 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.151500 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.151575 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.151637 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.255743 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.255845 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.255870 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.255903 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.255925 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.278049 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.281860 4942 generic.go:334] "Generic (PLEG): container finished" podID="1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d" containerID="86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703" exitCode=0 Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.282505 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" event={"ID":"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d","Type":"ContainerDied","Data":"86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.309544 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.335987 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.355988 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.358879 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.358913 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.358928 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.358955 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.358971 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.373614 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.380633 4942 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.387661 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.399566 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.414096 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.428941 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.447613 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.463353 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.463393 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.463405 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.463362 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.463427 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.463601 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.478017 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.490787 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.504214 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.519358 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.566221 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.566261 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.566271 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.566289 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.566299 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.671386 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.671436 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.671450 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.671471 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.671484 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.775402 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.775456 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.775470 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.775491 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.775508 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.879499 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.879543 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.879559 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.879580 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.879592 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.971837 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:36:35.878794846 +0000 UTC Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.982953 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.982994 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.983007 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.983070 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.983087 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.034869 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.034902 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:48 crc kubenswrapper[4942]: E0218 19:17:48.035648 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:17:48 crc kubenswrapper[4942]: E0218 19:17:48.035904 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.086565 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.086610 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.086629 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.086654 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.086670 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.193992 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.194392 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.194571 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.194689 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.194850 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.290556 4942 generic.go:334] "Generic (PLEG): container finished" podID="1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d" containerID="522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168" exitCode=0 Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.290622 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" event={"ID":"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d","Type":"ContainerDied","Data":"522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.296753 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.297054 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.297205 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.297365 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.297508 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.315302 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.335036 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.359579 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.378537 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.401138 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc 
kubenswrapper[4942]: I0218 19:17:48.401240 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.401272 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.401308 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.401332 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.407351 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.428718 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.450462 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.473097 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.489112 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.504285 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.504336 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.504356 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc 
kubenswrapper[4942]: I0218 19:17:48.504381 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.504398 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.507271 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.522594 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.534148 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.549160 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.567430 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.608011 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.608064 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.608085 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.608105 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.608115 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.710721 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.710779 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.710791 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.710808 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.710821 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.814009 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.814068 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.814080 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.814102 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.814116 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.917017 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.917079 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.917092 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.917113 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.917124 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.972564 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 06:21:24.585700294 +0000 UTC Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.020707 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.020747 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.020789 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.020813 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.020830 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.035720 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.035931 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.124121 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.124213 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.124244 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.124263 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.124276 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.227475 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.227867 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.228346 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.228830 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.229001 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.300331 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" event={"ID":"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d","Type":"ContainerStarted","Data":"d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.309122 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.309850 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.310094 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.323288 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.331182 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.331262 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.331287 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.331318 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.331337 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.346243 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c07167
38e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.360623 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.369349 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.383844 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.408716 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.433984 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.434020 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.434030 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.434049 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.434063 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.444969 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.463957 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.481125 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.500878 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.514239 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.528417 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.536892 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc 
kubenswrapper[4942]: I0218 19:17:49.536968 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.536983 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.537003 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.537017 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.542670 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ad
b31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.560486 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.572599 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.586391 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.596512 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.612123 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.617455 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.617524 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.617740 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.617806 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.617827 4942 projected.go:194] 
Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.617918 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:57.617886231 +0000 UTC m=+37.322818936 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.618400 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.618423 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.618432 4942 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.618464 
4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:57.618455325 +0000 UTC m=+37.323387990 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.627000 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z 
is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.639368 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.639427 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.639443 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.639471 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.639493 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.644450 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.659575 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.673380 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.688866 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.702902 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.715958 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.733079 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.742393 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.742436 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.742447 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.742466 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.742478 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.754145 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.772548 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.791892 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.820070 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.820347 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.820394 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.820610 4942 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.820801 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:57.820665199 +0000 UTC m=+37.525597864 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.821154 4942 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.821417 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:57.821241363 +0000 UTC m=+37.526174048 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.821735 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:17:57.821715165 +0000 UTC m=+37.526647830 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.845853 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.845895 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.845907 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.845923 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.845933 4942 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.949112 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.949698 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.949855 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.950010 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.950125 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.973478 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:06:04.5697286 +0000 UTC Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.005319 4942 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.035058 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.035071 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:50 crc kubenswrapper[4942]: E0218 19:17:50.035594 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:17:50 crc kubenswrapper[4942]: E0218 19:17:50.035902 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.052804 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.052880 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.052901 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.052926 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.052942 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.155933 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.156372 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.156499 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.156623 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.156687 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.259544 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.259603 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.259615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.259663 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.259680 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.312666 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.348894 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.361849 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.361922 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.361941 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.361969 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.361987 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.364167 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.378154 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.397915 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.413894 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.434848 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.465541 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc 
kubenswrapper[4942]: I0218 19:17:50.465599 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.465618 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.465644 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.465666 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.466560 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.489275 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.508478 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.524516 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.552573 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.568442 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.568474 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.568508 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.568528 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.568540 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.591100 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.627874 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.640203 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.656294 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.671118 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.671197 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.671210 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.671232 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.671245 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.774924 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.775035 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.775063 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.775100 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.775143 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.878967 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.879043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.879066 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.879090 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.879104 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.974152 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:58:51.278953405 +0000 UTC Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.982168 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.982213 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.982223 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.982242 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.982256 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.035039 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:51 crc kubenswrapper[4942]: E0218 19:17:51.035227 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.056149 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.084614 4942 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.085743 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.085799 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.085809 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.085830 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.085839 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.105257 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.116522 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.141157 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.166298 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.181561 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.189058 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.189112 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.189123 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.189147 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.189160 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.197310 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.212569 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.228385 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.259002 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.273009 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.285077 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.292326 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.292373 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.292384 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.292407 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.292419 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.312947 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.395495 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.395543 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.395554 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.395572 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.395584 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.498957 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.499014 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.499025 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.499048 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.499065 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.601678 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.602056 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.602065 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.602085 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.602096 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.705198 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.705241 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.705251 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.705273 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.705284 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.808444 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.808505 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.808517 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.808540 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.808554 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.911655 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.911700 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.911713 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.911743 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.911788 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.975183 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:15:30.302671854 +0000 UTC Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.015349 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.015434 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.015457 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.015488 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.015510 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.035812 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.035812 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:52 crc kubenswrapper[4942]: E0218 19:17:52.036053 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:17:52 crc kubenswrapper[4942]: E0218 19:17:52.036179 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.119108 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.119173 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.119189 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.119210 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.119226 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.222304 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.222360 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.222374 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.222395 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.222424 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.321818 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/0.log" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.324808 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.324869 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.324885 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.324908 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.324924 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.332131 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf" exitCode=1 Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.332205 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.333121 4942 scope.go:117] "RemoveContainer" containerID="34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.352589 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.365682 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.383025 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.397149 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.414793 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.428947 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc 
kubenswrapper[4942]: I0218 19:17:52.429016 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.429034 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.429060 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.429079 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.441002 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19
:17:51Z\\\",\\\"message\\\":\\\"roup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 19:17:51.651321 6254 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:51.651348 6254 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:17:51.651395 6254 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nF0218 19:17:51.651417 6254 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543ed
c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.461378 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.476451 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.498091 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.520909 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.532189 4942 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.532218 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.532227 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.532244 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.532257 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.536715 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.548221 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.558688 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.574750 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.635665 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.635710 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.635724 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.635744 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.635755 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.738640 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.738691 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.738701 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.738724 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.738739 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.841856 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.842302 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.842383 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.842467 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.842562 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.877689 4942 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.945702 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.945756 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.945789 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.945813 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.945832 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.975379 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 01:45:22.663782011 +0000 UTC Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.035950 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:53 crc kubenswrapper[4942]: E0218 19:17:53.036140 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.049203 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.049252 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.049265 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.049289 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.049304 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.152169 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.152243 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.152266 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.152297 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.152319 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.254949 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.254998 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.255016 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.255043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.255054 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.336608 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/0.log" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.339024 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.339564 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.354815 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.357641 4942 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.357678 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.357690 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.357711 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.357726 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.370954 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.386525 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.397122 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.409020 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.424548 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.448213 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.460381 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc 
kubenswrapper[4942]: I0218 19:17:53.460420 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.460432 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.460448 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.460458 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.478173 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:51Z\\\",\\\"message\\\":\\\"roup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 19:17:51.651321 6254 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:51.651348 6254 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:17:51.651395 6254 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nF0218 19:17:51.651417 6254 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.494310 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.507681 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.522369 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.537731 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ad
b31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.552549 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.563555 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.563598 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.563609 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc 
kubenswrapper[4942]: I0218 19:17:53.563630 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.563642 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.565172 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.666196 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.666579 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.666680 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.666797 4942 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.666897 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.769821 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.769868 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.769880 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.769899 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.769913 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.872617 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.872897 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.872959 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.873021 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.873085 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.975949 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 07:25:47.144854735 +0000 UTC Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.977661 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.977719 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.977738 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.977796 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.977816 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.035695 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:54 crc kubenswrapper[4942]: E0218 19:17:54.035877 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.036078 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:54 crc kubenswrapper[4942]: E0218 19:17:54.036317 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.081934 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.082039 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.082057 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.082078 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.082092 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.186129 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.186173 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.186190 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.186210 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.186223 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.289557 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.289608 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.289619 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.289639 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.289655 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.345726 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/1.log" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.347044 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/0.log" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.352019 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717" exitCode=1 Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.352104 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.352375 4942 scope.go:117] "RemoveContainer" containerID="34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.353243 4942 scope.go:117] "RemoveContainer" containerID="093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717" Feb 18 19:17:54 crc kubenswrapper[4942]: E0218 19:17:54.353667 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.372231 4942 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.392607 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.392720 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.392733 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.392755 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.392795 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.396728 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.413496 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.427722 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.445679 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.461467 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.496538 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:51Z\\\",\\\"message\\\":\\\"roup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 19:17:51.651321 6254 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:51.651348 6254 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:17:51.651395 6254 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nF0218 19:17:51.651417 6254 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 
factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee
15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.496900 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.496971 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.496992 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.497023 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.497043 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.520902 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.542065 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.565905 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.579740 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.599519 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.599659 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.600054 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.600066 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.600084 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: 
I0218 19:17:54.600098 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.616646 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.628078 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.704531 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.704836 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.704930 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.705007 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.705067 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.715140 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z"] Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.716244 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.719882 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.721500 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.739263 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.756095 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.774199 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.777522 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.777625 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 
19:17:54.777685 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.777711 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zxvs\" (UniqueName: \"kubernetes.io/projected/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-kube-api-access-2zxvs\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.789911 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.807486 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.807528 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.807537 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc 
kubenswrapper[4942]: I0218 19:17:54.807556 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.807567 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.813498 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.836065 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:51Z\\\",\\\"message\\\":\\\"roup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 19:17:51.651321 6254 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:51.651348 6254 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:17:51.651395 6254 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nF0218 19:17:51.651417 6254 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 
factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee
15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.851738 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.869286 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.878799 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.878845 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zxvs\" (UniqueName: \"kubernetes.io/projected/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-kube-api-access-2zxvs\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.878867 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.878918 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.879577 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.879693 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.887332 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.889463 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: 
\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.900024 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.901262 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zxvs\" (UniqueName: \"kubernetes.io/projected/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-kube-api-access-2zxvs\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.909543 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.909575 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.909586 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.909624 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.909638 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.920092 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.936176 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.953179 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.976412 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 14:01:52.301371042 +0000 UTC Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.976710 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"stat
e\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.997205 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.012910 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.013108 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc 
kubenswrapper[4942]: I0218 19:17:55.013193 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.013331 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.013415 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.031259 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.035649 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:55 crc kubenswrapper[4942]: E0218 19:17:55.035845 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:55 crc kubenswrapper[4942]: W0218 19:17:55.051082 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f8b40cd_7bbd_4189_a8c0_f4131e8b9add.slice/crio-07903325e930473a2cb750b610573cffd86b8c58f8b6f3b67e6a0cf63c6bfca4 WatchSource:0}: Error finding container 07903325e930473a2cb750b610573cffd86b8c58f8b6f3b67e6a0cf63c6bfca4: Status 404 returned error can't find the container with id 07903325e930473a2cb750b610573cffd86b8c58f8b6f3b67e6a0cf63c6bfca4 Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.118282 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.118338 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.118404 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.118426 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.118450 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.221386 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.221494 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.221521 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.221551 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.221570 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.326093 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.326837 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.326992 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.327139 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.327298 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.366518 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/1.log" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.372206 4942 scope.go:117] "RemoveContainer" containerID="093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717" Feb 18 19:17:55 crc kubenswrapper[4942]: E0218 19:17:55.372517 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.378801 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" event={"ID":"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add","Type":"ContainerStarted","Data":"b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.378863 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" event={"ID":"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add","Type":"ContainerStarted","Data":"07903325e930473a2cb750b610573cffd86b8c58f8b6f3b67e6a0cf63c6bfca4"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.394446 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.424321 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.430321 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.430384 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.430397 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.430418 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.430673 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.438225 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.458105 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.476203 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.494154 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.510060 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.523256 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.533657 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.533721 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.533738 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.533782 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.533813 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.543265 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.568213 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 
19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.588941 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.607826 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.620730 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.632986 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.637151 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.637221 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.637241 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc 
kubenswrapper[4942]: I0218 19:17:55.637270 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.637335 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.648312 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.739832 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.740115 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.740193 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.740277 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.740388 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.843560 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.843614 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.843625 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.843648 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.843660 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.946349 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.946400 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.946414 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.946437 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.946449 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.977722 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 16:15:03.423602074 +0000 UTC Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.035803 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.035832 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:56 crc kubenswrapper[4942]: E0218 19:17:56.035935 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:17:56 crc kubenswrapper[4942]: E0218 19:17:56.036051 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.049244 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.049286 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.049296 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.049315 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.049329 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.152947 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.152998 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.153008 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.153026 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.153035 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.214456 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-qwg6q"] Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.214981 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:56 crc kubenswrapper[4942]: E0218 19:17:56.215046 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.233669 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fde
e88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resource
s\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.251912 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.256458 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.256541 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.256561 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.256615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.256636 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.270085 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.283665 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc 
kubenswrapper[4942]: I0218 19:17:56.294370 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kmmq\" (UniqueName: \"kubernetes.io/projected/ac5b5f40-34db-4aeb-abb4-57204673bd53-kube-api-access-7kmmq\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.294527 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.304117 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.321236 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.339598 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.355724 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.359847 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.359888 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.359899 4942 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.359921 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.359933 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.373993 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a4
6e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.384259 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" event={"ID":"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add","Type":"ContainerStarted","Data":"3573f095c220e3b1994394b83fdf24c7d1a721ccee2755042f520467f21ae1fa"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.393502 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318
f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secre
ts/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.395116 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kmmq\" (UniqueName: \"kubernetes.io/projected/ac5b5f40-34db-4aeb-abb4-57204673bd53-kube-api-access-7kmmq\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.395272 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:56 crc kubenswrapper[4942]: E0218 19:17:56.395531 4942 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:17:56 crc kubenswrapper[4942]: E0218 19:17:56.395646 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs 
podName:ac5b5f40-34db-4aeb-abb4-57204673bd53 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:56.895617768 +0000 UTC m=+36.600550433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs") pod "network-metrics-daemon-qwg6q" (UID: "ac5b5f40-34db-4aeb-abb4-57204673bd53") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.419122 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 
19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.445561 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kmmq\" (UniqueName: \"kubernetes.io/projected/ac5b5f40-34db-4aeb-abb4-57204673bd53-kube-api-access-7kmmq\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.459924 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.474973 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.475032 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.475051 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.475083 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.475098 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.488425 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.503476 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.516922 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.527377 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.543071 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.557645 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.572684 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.578118 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.578145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.578154 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.578169 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.578179 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.591632 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.605434 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.619818 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc 
kubenswrapper[4942]: I0218 19:17:56.635671 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.648445 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.661305 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.677920 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.680637 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.680684 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.680700 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.680720 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.680731 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.698966 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.715223 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.729201 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.740662 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.755232 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.777016 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 
19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.783907 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.783954 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.783967 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.783990 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.784005 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.886592 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.886653 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.886674 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.886700 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.886721 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.899742 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:56 crc kubenswrapper[4942]: E0218 19:17:56.899945 4942 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:17:56 crc kubenswrapper[4942]: E0218 19:17:56.900015 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs podName:ac5b5f40-34db-4aeb-abb4-57204673bd53 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:57.899996405 +0000 UTC m=+37.604929070 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs") pod "network-metrics-daemon-qwg6q" (UID: "ac5b5f40-34db-4aeb-abb4-57204673bd53") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.978201 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 17:56:32.730918974 +0000 UTC Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.982122 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.982167 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.982177 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.982195 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.982208 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:56.999910 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.004872 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.004961 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.004980 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.005010 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.005029 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.027407 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.032572 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.032615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.032628 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.032647 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.032663 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.035170 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.035329 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.055581 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.060719 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.060769 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.060783 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.060800 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.060812 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.079621 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.084324 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.084554 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.084710 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.084866 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.084973 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.100434 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.101016 4942 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.103370 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.103449 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.103466 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.103486 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.103497 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.206635 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.206715 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.206737 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.206794 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.206815 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.311929 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.311987 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.312004 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.312034 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.312052 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.414860 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.414904 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.414916 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.414937 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.414951 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.517880 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.517934 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.517953 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.517976 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.517991 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.621812 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.621897 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.621917 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.621943 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.621962 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.708250 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.708308 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.708553 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.708559 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.708635 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.708657 4942 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 
19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.708580 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.708747 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:13.70871734 +0000 UTC m=+53.413650045 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.708783 4942 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.708882 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:13.708865873 +0000 UTC m=+53.413798578 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.725067 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.725112 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.725144 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.725161 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.725170 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.827576 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.827631 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.827647 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.827672 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.827690 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.910071 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.910170 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.910274 4942 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.910270 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:18:13.910232867 +0000 UTC m=+53.615165572 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.910377 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs podName:ac5b5f40-34db-4aeb-abb4-57204673bd53 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:59.91035785 +0000 UTC m=+39.615290555 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs") pod "network-metrics-daemon-qwg6q" (UID: "ac5b5f40-34db-4aeb-abb4-57204673bd53") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.910417 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.910476 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.910566 4942 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.910630 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:13.910613546 +0000 UTC m=+53.615546231 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.910650 4942 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.910837 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:13.910803581 +0000 UTC m=+53.615736406 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.931439 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.931508 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.931531 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.931561 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.931584 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.979218 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 03:47:59.850819303 +0000 UTC Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.034312 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.034365 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.034381 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.034401 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.034415 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.035122 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.035227 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.035124 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:58 crc kubenswrapper[4942]: E0218 19:17:58.035490 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:17:58 crc kubenswrapper[4942]: E0218 19:17:58.035246 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:17:58 crc kubenswrapper[4942]: E0218 19:17:58.035564 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.136508 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.136651 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.136675 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.136699 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.136719 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.239614 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.239655 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.239667 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.239686 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.239699 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.342786 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.342837 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.342849 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.342873 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.342887 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.446251 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.446308 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.446326 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.446349 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.446364 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.549599 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.549664 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.549677 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.549703 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.549721 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.653053 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.653132 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.653152 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.653187 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.653210 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.756298 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.756392 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.756420 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.756456 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.756484 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.860359 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.860428 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.860446 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.860474 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.860494 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.963938 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.964019 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.964034 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.964053 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.964067 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.979395 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:34:28.370997943 +0000 UTC Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.035641 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:59 crc kubenswrapper[4942]: E0218 19:17:59.036479 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.036965 4942 scope.go:117] "RemoveContainer" containerID="b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.070231 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.070300 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.070314 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.070336 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.070356 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.173905 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.174008 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.174025 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.174083 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.174099 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.277235 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.277647 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.277669 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.277699 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.277719 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.380990 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.381057 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.381078 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.381106 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.381126 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.404039 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.408021 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.408826 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.432802 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.451599 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.470164 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.483810 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.483875 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.483895 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.483923 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.483943 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.486591 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.503590 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.518048 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.531726 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc 
kubenswrapper[4942]: I0218 19:17:59.545911 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.556973 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.571958 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.587281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.587616 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.587703 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.587813 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.587979 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.589504 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.606481 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.621429 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.637583 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.654329 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.681110 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 
19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.690693 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.690751 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.690795 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.690822 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.690840 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.793578 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.793615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.793626 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.793648 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.793661 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.896609 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.896654 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.896669 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.896688 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.896700 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.946136 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:59 crc kubenswrapper[4942]: E0218 19:17:59.946312 4942 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:17:59 crc kubenswrapper[4942]: E0218 19:17:59.946381 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs podName:ac5b5f40-34db-4aeb-abb4-57204673bd53 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:03.946364725 +0000 UTC m=+43.651297390 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs") pod "network-metrics-daemon-qwg6q" (UID: "ac5b5f40-34db-4aeb-abb4-57204673bd53") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.979917 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:27:27.07841183 +0000 UTC Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.999338 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.999437 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.999447 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.999469 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.999478 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.035687 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.035826 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:00 crc kubenswrapper[4942]: E0218 19:18:00.035898 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.035844 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:00 crc kubenswrapper[4942]: E0218 19:18:00.036010 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:00 crc kubenswrapper[4942]: E0218 19:18:00.036120 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.102146 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.102200 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.102217 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.102239 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.102254 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.205471 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.205535 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.205552 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.205577 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.205807 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.310445 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.310497 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.310507 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.310530 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.310542 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.412742 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.412807 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.412819 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.412836 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.412847 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.516515 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.516563 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.516571 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.516589 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.516600 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.620119 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.620195 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.620209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.620232 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.620270 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.723334 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.723457 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.723492 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.723523 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.723545 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.827203 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.827281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.827293 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.827323 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.827337 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.930419 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.930495 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.930509 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.930530 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.930541 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.980410 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 07:05:11.098544652 +0000 UTC Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.034993 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.035168 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.035280 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.035307 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.035339 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.035361 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: E0218 19:18:01.035370 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.059849 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.080701 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.099826 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.123271 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.139043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.139106 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.139118 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.139143 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.139129 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"
name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.139156 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.155733 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metri
cs-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a721ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.172186 4942 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc 
kubenswrapper[4942]: I0218 19:18:01.190077 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.205634 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.229878 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.242081 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.242161 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.242180 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.242209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.242229 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.248679 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\
\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.275022 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 
19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.294086 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.308992 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.322789 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.336910 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.345024 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.345107 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.345141 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc 
kubenswrapper[4942]: I0218 19:18:01.345158 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.345170 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.448209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.448281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.448297 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.448325 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.448343 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.552057 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.552118 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.552135 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.552163 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.552182 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.656560 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.656608 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.656619 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.656638 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.656650 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.762636 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.762856 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.762888 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.762928 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.762965 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.868362 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.868817 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.868941 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.869088 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.869200 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.971693 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.971748 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.971798 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.971827 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.971845 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.981187 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:17:37.004494459 +0000 UTC Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.035458 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.035485 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.035485 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:02 crc kubenswrapper[4942]: E0218 19:18:02.035679 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:02 crc kubenswrapper[4942]: E0218 19:18:02.035914 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:02 crc kubenswrapper[4942]: E0218 19:18:02.036029 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.075908 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.075999 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.076073 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.076101 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.076151 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.179264 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.179580 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.179657 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.179738 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.179837 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.282536 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.282592 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.282608 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.282632 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.282648 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.385596 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.385703 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.385724 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.385749 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.385798 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.489216 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.489281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.489300 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.489325 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.489344 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.592693 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.593015 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.593035 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.593061 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.593080 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.696473 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.696523 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.696533 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.696560 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.696574 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.799950 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.800009 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.800030 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.800059 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.800080 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.905068 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.905122 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.905140 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.905166 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.905183 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.982012 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 08:13:31.966740158 +0000 UTC Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.010452 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.010516 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.010534 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.010560 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.010580 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.035072 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:03 crc kubenswrapper[4942]: E0218 19:18:03.035242 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.113820 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.113876 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.113896 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.113921 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.113940 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.217495 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.217597 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.217615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.217641 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.217659 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.321548 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.321652 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.321679 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.321717 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.321749 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.424861 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.424921 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.424939 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.424965 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.424984 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.528266 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.528356 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.528380 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.528417 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.528440 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.630668 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.630711 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.630721 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.630737 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.630747 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.733678 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.733725 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.733734 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.733751 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.733777 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.837241 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.837316 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.837334 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.837365 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.837385 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.940665 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.940728 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.940741 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.940778 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.940793 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.982633 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:17:46.000892871 +0000 UTC Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.995670 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:03 crc kubenswrapper[4942]: E0218 19:18:03.995915 4942 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:03 crc kubenswrapper[4942]: E0218 19:18:03.996009 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs podName:ac5b5f40-34db-4aeb-abb4-57204673bd53 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:11.995986193 +0000 UTC m=+51.700918888 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs") pod "network-metrics-daemon-qwg6q" (UID: "ac5b5f40-34db-4aeb-abb4-57204673bd53") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.035876 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.035940 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.035894 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:04 crc kubenswrapper[4942]: E0218 19:18:04.036099 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:04 crc kubenswrapper[4942]: E0218 19:18:04.036297 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:04 crc kubenswrapper[4942]: E0218 19:18:04.036562 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.044996 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.045067 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.045089 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.045117 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.045137 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.149529 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.149678 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.149704 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.149740 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.149799 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.252919 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.252989 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.253006 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.253032 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.253052 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.356543 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.356616 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.356630 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.356653 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.356671 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.461181 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.461280 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.461303 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.461337 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.461362 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.564846 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.565017 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.565038 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.565069 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.565094 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.668329 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.668418 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.668447 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.668482 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.668512 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.771703 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.771800 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.771821 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.771846 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.771864 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.876271 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.876361 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.876383 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.876412 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.876431 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.980239 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.980300 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.980319 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.980346 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.980370 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.983230 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:07:16.234636081 +0000 UTC Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.035281 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:05 crc kubenswrapper[4942]: E0218 19:18:05.035504 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.083610 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.083683 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.083693 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.083714 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.083728 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.186511 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.186596 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.186618 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.186648 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.186668 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.291180 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.291248 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.291271 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.291301 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.291324 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.395406 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.395512 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.395535 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.395564 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.395583 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.499524 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.499663 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.499725 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.499804 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.499834 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.602572 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.602648 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.602685 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.602704 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.602718 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.706220 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.706298 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.706315 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.706343 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.706361 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.810324 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.810408 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.810426 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.810456 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.810475 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.913160 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.913230 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.913241 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.913256 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.913266 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.983885 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 05:37:29.484712551 +0000 UTC Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.016917 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.016969 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.016980 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.017002 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.017015 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.035328 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:06 crc kubenswrapper[4942]: E0218 19:18:06.035458 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.035557 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.035568 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:06 crc kubenswrapper[4942]: E0218 19:18:06.035861 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:06 crc kubenswrapper[4942]: E0218 19:18:06.035998 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.120092 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.120187 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.120279 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.120318 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.120342 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.223851 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.223938 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.223960 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.223985 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.224005 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.327860 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.327912 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.327923 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.327939 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.327950 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.431298 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.431339 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.431348 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.431365 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.431377 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.539349 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.539402 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.539416 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.539616 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.539664 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.643980 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.644060 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.644079 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.644102 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.644121 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.747648 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.747716 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.747733 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.747756 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.747812 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.850654 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.850721 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.850794 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.850834 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.850858 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.953900 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.953985 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.954004 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.954034 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.954052 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.984828 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 21:37:30.341159677 +0000 UTC Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.035845 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:07 crc kubenswrapper[4942]: E0218 19:18:07.036044 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.037121 4942 scope.go:117] "RemoveContainer" containerID="093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.057222 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.057294 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.057312 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.057338 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.057357 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.306838 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.307322 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.307348 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.307383 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.307407 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: E0218 19:18:07.327825 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.334093 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.334199 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.334227 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.334263 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.334289 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: E0218 19:18:07.356170 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.362422 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.362491 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.362510 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.362539 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.362558 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: E0218 19:18:07.385952 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.391701 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.391737 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.391745 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.391780 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.391790 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: E0218 19:18:07.407713 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.412807 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.413230 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.413245 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.413262 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.413293 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: E0218 19:18:07.426721 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: E0218 19:18:07.426982 4942 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.431225 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.431290 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.431311 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.431351 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.431372 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.442393 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/1.log" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.446652 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5"} Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.447453 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.466863 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf
3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.518807 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.537252 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.537289 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.537297 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.537312 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.537324 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.546638 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 
19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.561055 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.574131 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.590481 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.605454 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.621946 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.633063 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.639713 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.639745 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.639753 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.639781 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.639791 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.643592 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.656709 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.673622 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.695328 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc 
kubenswrapper[4942]: I0218 19:18:07.711910 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.728739 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.742139 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.742203 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.742215 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.742236 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.742250 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.744292 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.844556 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.844602 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.844612 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.844629 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.844641 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.947300 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.947346 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.947357 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.947376 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.947387 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.985908 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:11:39.798510575 +0000 UTC Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.035682 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.035789 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.035701 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:08 crc kubenswrapper[4942]: E0218 19:18:08.035967 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:08 crc kubenswrapper[4942]: E0218 19:18:08.036220 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:08 crc kubenswrapper[4942]: E0218 19:18:08.036449 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.050281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.050345 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.050364 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.050392 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.050411 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.153432 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.153492 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.153505 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.153526 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.153539 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.256009 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.256057 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.256068 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.256089 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.256104 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.359357 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.359430 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.359448 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.359477 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.359498 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.460076 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/2.log" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.461244 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/1.log" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.462509 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.462574 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.462593 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.462623 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.462646 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.466396 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5" exitCode=1 Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.466441 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.466481 4942 scope.go:117] "RemoveContainer" containerID="093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.467588 4942 scope.go:117] "RemoveContainer" containerID="5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5" Feb 18 19:18:08 crc kubenswrapper[4942]: E0218 19:18:08.467961 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.487243 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.502485 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.527107 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.546034 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.565413 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.565457 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.565468 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc 
kubenswrapper[4942]: I0218 19:18:08.565488 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.565502 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.570279 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.605289 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 
19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed 
*v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\
\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 
19:18:08.629933 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.650650 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.668056 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.668124 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.668145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.668171 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.668191 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.670886 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.696405 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.715987 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.733074 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.750538 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.770653 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.771158 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.771211 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.771223 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.771244 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.771257 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.787184 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a721ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.801517 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc 
kubenswrapper[4942]: I0218 19:18:08.874711 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.874788 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.874801 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.874822 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.874836 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.978685 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.978755 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.978788 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.978811 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.978826 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.986374 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 07:21:25.35222522 +0000 UTC Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.035200 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:09 crc kubenswrapper[4942]: E0218 19:18:09.035412 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.083158 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.083246 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.083272 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.083305 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.083332 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.186021 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.186082 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.186099 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.186123 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.186139 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.289650 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.289726 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.289746 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.289802 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.289827 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.393311 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.393383 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.393401 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.393431 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.393450 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.473296 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/2.log" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.478557 4942 scope.go:117] "RemoveContainer" containerID="5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5" Feb 18 19:18:09 crc kubenswrapper[4942]: E0218 19:18:09.478851 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.496440 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.496503 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.496522 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.496550 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.496574 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.500282 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.516441 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.537895 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.558600 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.577063 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.596012 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.600150 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc 
kubenswrapper[4942]: I0218 19:18:09.600227 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.600246 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.600275 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.600292 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.634879 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.656350 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.676133 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.700085 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.707000 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.707059 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.707078 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.707106 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.707125 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.723096 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.741040 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.761995 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.783731 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.803509 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.810296 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.810373 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.810392 4942 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.810419 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.810437 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.821818 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 
18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.914338 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.914397 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.914417 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.914443 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.914461 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.987498 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 16:35:45.531100897 +0000 UTC Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.017948 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.018018 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.018043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.018071 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.018089 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.035528 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.035632 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.035623 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:10 crc kubenswrapper[4942]: E0218 19:18:10.036023 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:10 crc kubenswrapper[4942]: E0218 19:18:10.036573 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:10 crc kubenswrapper[4942]: E0218 19:18:10.037183 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.121840 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.121928 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.121955 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.121990 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.122014 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.225215 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.225268 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.225285 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.225310 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.225328 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.329239 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.329320 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.329345 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.329378 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.329401 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.432302 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.432396 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.432419 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.432448 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.432467 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.535121 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.535190 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.535199 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.535218 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.535247 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.638890 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.639363 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.639398 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.639421 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.639435 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.743407 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.743481 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.743498 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.743525 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.743546 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.847701 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.847792 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.847805 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.847824 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.847837 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.951680 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.951730 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.951743 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.951793 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.951810 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.988444 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:37:04.329019429 +0000 UTC Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.035328 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:11 crc kubenswrapper[4942]: E0218 19:18:11.035581 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.055237 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.055315 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.055333 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.055408 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.055492 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.063117 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.085010 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.106618 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.124813 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc 
kubenswrapper[4942]: I0218 19:18:11.148023 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.158366 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.158428 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.158447 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.158473 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.158492 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.166814 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.190971 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3
79b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203
bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-
cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.228399 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.247592 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.264634 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.264722 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.264745 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.264806 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.264828 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.268513 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.286407 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.303906 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.319677 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.336529 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.352927 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.367438 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.368926 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.369064 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.369166 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.369264 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.369370 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.473294 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.473577 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.473676 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.473752 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.473854 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.578063 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.578145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.578170 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.578205 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.578230 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.682150 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.682241 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.682258 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.682291 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.682317 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.785521 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.785600 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.785623 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.785657 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.785680 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.888885 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.888949 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.888968 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.888993 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.889014 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.989005 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:11:58.571778786 +0000 UTC Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.992606 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.992725 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.992746 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.992861 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.992879 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.035235 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.035344 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.035275 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:12 crc kubenswrapper[4942]: E0218 19:18:12.035500 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:12 crc kubenswrapper[4942]: E0218 19:18:12.035743 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:12 crc kubenswrapper[4942]: E0218 19:18:12.035987 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.065863 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:12 crc kubenswrapper[4942]: E0218 19:18:12.065997 4942 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:12 crc kubenswrapper[4942]: E0218 19:18:12.066110 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs podName:ac5b5f40-34db-4aeb-abb4-57204673bd53 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:28.06607437 +0000 UTC m=+67.771007035 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs") pod "network-metrics-daemon-qwg6q" (UID: "ac5b5f40-34db-4aeb-abb4-57204673bd53") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.096201 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.096279 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.096302 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.096334 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.096359 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.199758 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.199865 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.199888 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.199916 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.199938 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.304018 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.304103 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.304128 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.304166 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.304194 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.407772 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.407819 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.407830 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.407853 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.407864 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.511106 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.511162 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.511171 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.511192 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.511203 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.614677 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.614809 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.614834 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.614867 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.614898 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.718345 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.718418 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.718436 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.718463 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.718483 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.821418 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.821500 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.821524 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.821555 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.821577 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.931168 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.931280 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.931305 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.931340 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.931376 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.990070 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:21:00.52788685 +0000 UTC
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.034532 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.034597 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.034615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.034642 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.034661 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.035206 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.035493 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.137928 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.137995 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.138013 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.138041 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.138058 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.242405 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.242462 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.242478 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.242499 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.242516 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.345866 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.345945 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.345963 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.345989 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.346009 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.449546 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.449625 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.449647 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.449677 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.449701 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.552487 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.552558 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.552583 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.552615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.552637 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.655637 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.655701 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.655719 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.655742 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.655755 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.758223 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.758283 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.758297 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.758317 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.758331 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.785745 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.785910 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.786132 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.786200 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.786203 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.786264 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.786281 4942 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.786224 4942 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.786357 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:45.786332437 +0000 UTC m=+85.491265112 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.786396 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:45.786367498 +0000 UTC m=+85.491300193 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.861643 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.861691 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.861700 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.861716 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.861726 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.965301 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.965376 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.965394 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.965422 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.965439 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.988116 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.988296 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:18:45.988257154 +0000 UTC m=+85.693189859 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.988448 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.988516 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.988674 4942 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.988681 4942 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.988756 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:45.988737436 +0000 UTC m=+85.693670141 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.988831 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:45.988817058 +0000 UTC m=+85.693749753 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.991053 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 15:06:01.556037871 +0000 UTC
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.035408 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.035484 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:18:14 crc kubenswrapper[4942]: E0218 19:18:14.035611 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.035659 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q"
Feb 18 19:18:14 crc kubenswrapper[4942]: E0218 19:18:14.035848 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:18:14 crc kubenswrapper[4942]: E0218 19:18:14.036125 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.068799 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.068851 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.068868 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.068892 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.068909 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.172096 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.172149 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.172160 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.172182 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.172195 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.275526 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.275971 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.275995 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.276027 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.276049 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.380972 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.381054 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.381075 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.381104 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.381126 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.485217 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.485948 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.486252 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.486743 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.487013 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.590339 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.590752 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.590904 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.591016 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.591106 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.694856 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.694934 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.694954 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.694983 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.695002 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.799236 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.799325 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.799346 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.799375 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.799396 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.903370 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.903458 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.903528 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.903562 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.903585 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.991376 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:10:28.754119891 +0000 UTC Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.006908 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.006980 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.007006 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.007038 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.007060 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.035664 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:15 crc kubenswrapper[4942]: E0218 19:18:15.036011 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.109851 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.109931 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.109955 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.109980 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.110000 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.222436 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.222508 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.222528 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.222553 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.222570 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.325503 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.325561 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.325579 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.325605 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.325623 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.428897 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.428995 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.429013 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.429038 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.429054 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.532751 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.533195 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.533283 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.533375 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.533450 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.638244 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.638300 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.638310 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.638328 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.638341 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.741420 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.741672 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.741702 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.741737 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.741794 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.845386 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.845441 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.845459 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.845484 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.845501 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.859429 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.880688 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:15Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.897885 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:15Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.920881 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:15Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.941228 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:15Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.948472 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc 
kubenswrapper[4942]: I0218 19:18:15.948868 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.949070 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.949221 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.949343 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.971882 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:15Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.992508 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:10:11.174280786 +0000 UTC Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.992623 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:15Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.009748 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.033697 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.034860 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.034976 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:16 crc kubenswrapper[4942]: E0218 19:18:16.035045 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:16 crc kubenswrapper[4942]: E0218 19:18:16.035260 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.035299 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:16 crc kubenswrapper[4942]: E0218 19:18:16.035698 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.053194 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.053258 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.053275 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.053300 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.053319 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.053727 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.076069 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811c
a11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.095204 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.112723 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.133523 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.154044 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.156297 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.156340 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.156358 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.156383 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.156402 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.172237 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a721ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.187909 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc 
kubenswrapper[4942]: I0218 19:18:16.260246 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.260310 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.260331 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.260359 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.260378 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.363123 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.363203 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.363254 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.363291 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.363313 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.466281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.466324 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.466334 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.466352 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.466364 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.569446 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.569526 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.569549 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.569581 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.569599 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.673328 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.673433 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.673456 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.673484 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.673508 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.776988 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.777058 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.777076 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.777104 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.777124 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.880610 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.880677 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.880695 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.880715 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.880818 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.984077 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.984159 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.984176 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.984205 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.984227 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.993446 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:46:55.567108899 +0000 UTC Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.035691 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:17 crc kubenswrapper[4942]: E0218 19:18:17.035895 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.087749 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.087839 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.087861 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.087890 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.087914 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.191164 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.191230 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.191248 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.191272 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.191290 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.295106 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.295185 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.295209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.295240 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.295260 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.399027 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.399096 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.399114 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.399138 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.399155 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.502989 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.503063 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.503084 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.503114 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.503164 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.586315 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.586385 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.586402 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.586430 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.586448 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: E0218 19:18:17.607879 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.613621 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.613718 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.613740 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.613803 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.613833 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: E0218 19:18:17.635390 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.640553 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.640603 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.640620 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.640645 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.640662 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: E0218 19:18:17.662980 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.665404 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.669131 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.669172 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.669187 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.669214 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.669233 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.682254 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.685193 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: E0218 19:18:17.697589 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.701434 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.702971 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.703004 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.703017 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.703035 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.703051 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: E0218 19:18:17.723430 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: E0218 19:18:17.723806 4942 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.723941 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811c
a11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.726649 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.726694 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.726711 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.726737 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.726754 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.744237 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.764302 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.778173 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc 
kubenswrapper[4942]: I0218 19:18:17.798227 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.813823 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.826741 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.829261 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.829339 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.829361 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.829389 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.829411 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.850478 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.869469 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.886918 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.906087 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.923530 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.932022 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.932093 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.932119 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc 
kubenswrapper[4942]: I0218 19:18:17.932153 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.932177 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.944673 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.983599 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.994533 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 05:09:12.68325088 +0000 UTC Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.034748 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.034812 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.034824 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.034844 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: 
I0218 19:18:18.034858 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.035406 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.035521 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:18 crc kubenswrapper[4942]: E0218 19:18:18.035624 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.035831 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:18 crc kubenswrapper[4942]: E0218 19:18:18.035984 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:18 crc kubenswrapper[4942]: E0218 19:18:18.036337 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.138341 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.138414 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.138429 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.138452 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.138466 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.240755 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.240846 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.240864 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.240889 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.240908 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.344099 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.344153 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.344172 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.344199 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.344218 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.447316 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.447379 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.447398 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.447424 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.447445 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.549902 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.549965 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.549986 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.550017 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.550039 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.653392 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.653467 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.653488 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.653518 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.653542 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.756567 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.756640 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.756661 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.756688 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.756709 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.859953 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.860035 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.860072 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.860099 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.860124 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.963827 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.963901 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.963926 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.963953 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.963974 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.995571 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:38:31.910584604 +0000 UTC Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.035390 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:19 crc kubenswrapper[4942]: E0218 19:18:19.035641 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.067275 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.067337 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.067358 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.067384 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.067403 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.170987 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.171074 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.171094 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.171119 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.171139 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.274099 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.274162 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.274179 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.274209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.274228 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.378210 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.378286 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.378304 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.378336 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.378354 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.482060 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.482133 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.482151 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.482180 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.482200 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.586137 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.586199 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.586216 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.586241 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.586258 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.689453 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.689532 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.689548 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.689570 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.689583 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.793182 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.793268 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.793290 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.793324 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.793351 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.897037 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.897135 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.897160 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.897197 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.897225 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.996219 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:14:28.480010051 +0000 UTC Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.000622 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.000670 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.000678 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.000696 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.000711 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.035287 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.035336 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.035464 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:20 crc kubenswrapper[4942]: E0218 19:18:20.035465 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:20 crc kubenswrapper[4942]: E0218 19:18:20.035677 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:20 crc kubenswrapper[4942]: E0218 19:18:20.035750 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.036424 4942 scope.go:117] "RemoveContainer" containerID="5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5" Feb 18 19:18:20 crc kubenswrapper[4942]: E0218 19:18:20.036592 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.104092 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.104134 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.104145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.104163 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.104178 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.208105 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.208175 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.208189 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.208211 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.208227 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.311707 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.311824 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.311839 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.311861 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.311874 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.415846 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.415919 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.415937 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.415964 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.415981 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.524257 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.524659 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.525339 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.525374 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.525391 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.628775 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.628836 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.628849 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.628871 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.628885 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.731985 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.732047 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.732059 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.732079 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.732094 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.835923 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.835987 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.836013 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.836046 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.836070 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.939008 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.939102 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.939126 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.939158 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.939180 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.997348 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:32:28.419151421 +0000 UTC Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.035151 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:21 crc kubenswrapper[4942]: E0218 19:18:21.035347 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.042376 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.042420 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.042437 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.042462 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.042478 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.058789 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.079670 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.101868 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.118921 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc 
kubenswrapper[4942]: I0218 19:18:21.137852 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.145532 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.145590 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.145609 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.145635 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.145653 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.155293 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.182272 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3
79b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203
bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-
cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.199866 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.220317 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.249486 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc 
kubenswrapper[4942]: I0218 19:18:21.250085 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.250395 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.250823 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.251501 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.255036 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.273563 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.292488 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.308553 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.324601 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.340111 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811c
a11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.354747 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.354826 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.354843 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.354867 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.354886 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.356006 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.368667 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.458434 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.458477 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.458489 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.458511 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.458524 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.561600 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.561666 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.561685 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.561719 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.561740 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.665675 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.665732 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.665751 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.665816 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.665842 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.769325 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.769414 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.769432 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.769949 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.769984 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.872681 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.873051 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.873122 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.873192 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.873259 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.976717 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.976821 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.976847 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.976875 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.976896 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.998443 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 21:35:06.193859524 +0000 UTC Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.035111 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:22 crc kubenswrapper[4942]: E0218 19:18:22.035344 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.035646 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:22 crc kubenswrapper[4942]: E0218 19:18:22.035819 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.036044 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:22 crc kubenswrapper[4942]: E0218 19:18:22.036162 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.079968 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.080044 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.080061 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.080089 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.080110 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.183241 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.183290 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.183302 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.183322 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.183336 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.286246 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.286337 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.286354 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.286382 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.286401 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.389519 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.389568 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.389580 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.389600 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.389610 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.492456 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.492524 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.492537 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.492556 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.492571 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.595373 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.595427 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.595444 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.595463 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.595477 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.698490 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.698547 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.698558 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.698577 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.698588 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.803497 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.803581 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.803607 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.803644 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.803669 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.908749 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.908825 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.908843 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.908867 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.908886 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:22.999694 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 20:05:05.627976294 +0000 UTC Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.012901 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.012942 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.012953 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.012971 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.012983 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.035483 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:23 crc kubenswrapper[4942]: E0218 19:18:23.035644 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.116647 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.116735 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.116756 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.116814 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.116833 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.220508 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.220605 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.220625 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.220655 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.220678 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.323698 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.323798 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.323819 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.323845 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.323863 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.426866 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.426941 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.426963 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.426992 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.427012 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.530218 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.530280 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.530299 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.530325 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.530344 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.633329 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.633413 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.633432 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.633460 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.633484 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.738259 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.738326 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.738537 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.738561 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.738576 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.842575 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.842623 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.842639 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.842661 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.842674 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.946725 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.946839 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.946866 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.946897 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.946918 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.000584 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 23:39:48.95829545 +0000 UTC Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.035435 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.035539 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.035640 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:24 crc kubenswrapper[4942]: E0218 19:18:24.035864 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:24 crc kubenswrapper[4942]: E0218 19:18:24.036067 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:24 crc kubenswrapper[4942]: E0218 19:18:24.036186 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.051095 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.051159 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.051170 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.051192 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.051205 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.155055 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.155134 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.155153 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.155180 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.155200 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.258913 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.258993 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.259011 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.259037 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.259054 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.362933 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.363003 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.363022 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.363049 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.363067 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.466877 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.466930 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.466947 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.466971 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.466991 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.569887 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.569968 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.569985 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.570013 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.570037 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.672992 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.673037 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.673047 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.673064 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.673075 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.775551 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.775615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.775628 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.775653 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.775673 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.879288 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.879393 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.879446 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.879480 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.879551 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.984031 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.984084 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.984095 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.984115 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.984127 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.001715 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:37:48.663896929 +0000 UTC Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.035781 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:25 crc kubenswrapper[4942]: E0218 19:18:25.035992 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.088715 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.088850 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.088873 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.088907 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.088928 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.192353 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.192451 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.192506 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.192536 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.192559 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.297475 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.297592 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.297618 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.297650 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.297672 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.400619 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.400684 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.400704 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.400725 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.400742 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.503841 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.503900 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.503912 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.503930 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.503942 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.608713 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.608809 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.608832 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.608860 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.608880 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.712056 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.712121 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.712145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.712177 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.712203 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.816039 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.816095 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.816110 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.816130 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.816142 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.919152 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.919206 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.919221 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.919239 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.919251 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.002540 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:50:17.213514892 +0000 UTC Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.021252 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.021313 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.021322 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.021337 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.021346 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.035397 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.035397 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:26 crc kubenswrapper[4942]: E0218 19:18:26.035530 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.035507 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:26 crc kubenswrapper[4942]: E0218 19:18:26.035592 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:26 crc kubenswrapper[4942]: E0218 19:18:26.035869 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.123810 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.123855 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.123866 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.123884 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.123896 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.227541 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.227590 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.227600 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.227620 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.227631 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.330037 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.330088 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.330100 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.330119 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.330131 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.432604 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.432695 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.432707 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.432728 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.432741 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.535209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.535246 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.535256 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.535277 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.535290 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.638437 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.638490 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.638501 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.638522 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.638534 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.740382 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.740416 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.740426 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.740442 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.740451 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.843120 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.843479 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.843617 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.843738 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.843977 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.946108 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.946192 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.946207 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.946230 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.946242 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.003329 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:17:41.7437808 +0000 UTC
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.034799 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:18:27 crc kubenswrapper[4942]: E0218 19:18:27.034929 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.048645 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.048678 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.048688 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.048700 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.048710 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.151161 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.151202 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.151215 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.151231 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.151243 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.253712 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.253864 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.253964 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.254050 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.254140 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.357640 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.357719 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.357737 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.357797 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.357821 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.461043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.461090 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.461100 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.461116 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.461125 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.564125 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.564210 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.564270 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.564304 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.564327 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.667063 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.667120 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.667134 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.667157 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.667177 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.770177 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.770236 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.770247 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.770266 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.770279 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.873330 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.873412 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.873439 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.873467 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.873486 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.917848 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.917896 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.917908 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.917928 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.917940 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:27 crc kubenswrapper[4942]: E0218 19:18:27.935218 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:27Z is after 2025-08-24T17:21:41Z"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.946160 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.946256 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.946281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.946312 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.946341 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:27 crc kubenswrapper[4942]: E0218 19:18:27.968713 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.974001 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.974082 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.974103 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.974133 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.974155 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:27 crc kubenswrapper[4942]: E0218 19:18:27.991225 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[... duplicate image list elided; identical to the previous status patch above ...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.996664 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.996721 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.996735 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.996755 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.996796 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.003799 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:57:37.290663388 +0000 UTC Feb 18 19:18:28 crc kubenswrapper[4942]: E0218 19:18:28.010142 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[... duplicate image list elided; identical to the previous status patch above ...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",
\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.014785 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.014836 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.014849 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.014870 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.014882 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: E0218 19:18:28.031246 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:28 crc kubenswrapper[4942]: E0218 19:18:28.031380 4942 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.033229 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.033262 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.033276 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.033294 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.033305 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.035999 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:28 crc kubenswrapper[4942]: E0218 19:18:28.036166 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.036429 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:28 crc kubenswrapper[4942]: E0218 19:18:28.036550 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.036985 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:28 crc kubenswrapper[4942]: E0218 19:18:28.037090 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.047899 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.137649 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.137722 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.137745 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.137824 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.137848 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.154350 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:28 crc kubenswrapper[4942]: E0218 19:18:28.154517 4942 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:28 crc kubenswrapper[4942]: E0218 19:18:28.154586 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs podName:ac5b5f40-34db-4aeb-abb4-57204673bd53 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:00.15456942 +0000 UTC m=+99.859502085 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs") pod "network-metrics-daemon-qwg6q" (UID: "ac5b5f40-34db-4aeb-abb4-57204673bd53") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.240597 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.240660 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.240676 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.240697 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.240712 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.344628 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.344686 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.344695 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.344714 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.344725 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.448292 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.448358 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.448374 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.448405 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.448423 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.551653 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.551716 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.551735 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.551789 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.551807 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.655363 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.655435 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.655456 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.655481 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.655502 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.758244 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.758280 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.758289 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.758306 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.758320 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.862055 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.862125 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.862144 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.862169 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.862187 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.965977 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.966027 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.966037 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.966057 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.966065 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.004746 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:10:25.733891518 +0000 UTC Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.035434 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:29 crc kubenswrapper[4942]: E0218 19:18:29.035725 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.068716 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.068794 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.068806 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.068827 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.068840 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.171656 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.171721 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.171744 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.171799 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.171820 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.274374 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.274419 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.274438 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.274462 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.274479 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.377238 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.377292 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.377304 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.377324 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.377339 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.480589 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.480663 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.480684 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.480710 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.480727 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.583113 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.583165 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.583178 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.583197 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.583207 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.597787 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/0.log" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.597856 4942 generic.go:334] "Generic (PLEG): container finished" podID="75150b8c-7a02-497b-86c3-eabc9c8dbc55" containerID="f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4" exitCode=1 Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.597901 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jfwb" event={"ID":"75150b8c-7a02-497b-86c3-eabc9c8dbc55","Type":"ContainerDied","Data":"f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.598619 4942 scope.go:117] "RemoveContainer" containerID="f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.616466 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.631668 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.646918 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.658928 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.679870 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"2026-02-18T19:17:44+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29\\\\n2026-02-18T19:17:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29 to /host/opt/cni/bin/\\\\n2026-02-18T19:17:44Z [verbose] multus-daemon started\\\\n2026-02-18T19:17:44Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:18:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.686508 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.686536 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.686549 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.686568 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.686583 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.704168 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.720122 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.735583 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab011ca-f26a-4a5e-b093-b1f4dc0e5efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.751960 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.767939 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T1
9:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.779910 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.789182 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.789223 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.789235 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.789255 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.789266 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.789876 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a721ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.798400 4942 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc 
kubenswrapper[4942]: I0218 19:18:29.808077 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.817848 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.826735 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.838162 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.848259 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.891615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.891656 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.891665 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 
19:18:29.891682 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.891692 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.994961 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.995023 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.995042 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.995067 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.995084 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.005267 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:48:56.204228142 +0000 UTC Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.035643 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.035690 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.035750 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:30 crc kubenswrapper[4942]: E0218 19:18:30.035856 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:30 crc kubenswrapper[4942]: E0218 19:18:30.036035 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:30 crc kubenswrapper[4942]: E0218 19:18:30.036183 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.097868 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.097922 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.097933 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.097952 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.097964 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.200925 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.200972 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.200982 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.200998 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.201010 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.303481 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.303565 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.303587 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.303613 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.303630 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.406111 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.406198 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.406221 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.406253 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.406275 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.509642 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.509699 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.509712 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.509732 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.509745 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.604113 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/0.log" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.604186 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jfwb" event={"ID":"75150b8c-7a02-497b-86c3-eabc9c8dbc55","Type":"ContainerStarted","Data":"4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.617595 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.617633 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.617644 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.617666 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.617677 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.620015 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.662986 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"2026-02-18T19:17:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29\\\\n2026-02-18T19:17:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29 to /host/opt/cni/bin/\\\\n2026-02-18T19:17:44Z [verbose] multus-daemon started\\\\n2026-02-18T19:17:44Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:18:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.699841 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.713538 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.719488 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.719553 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.719569 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.719596 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.719613 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.726671 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab011ca-f26a-4a5e-b093-b1f4dc0e5efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.739412 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.753856 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.768539 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.784660 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811c
a11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.799308 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.816841 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.822020 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.822049 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.822058 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.822074 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.822086 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.832365 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.847418 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.864299 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.878675 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc 
kubenswrapper[4942]: I0218 19:18:30.898933 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.911122 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.925716 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.925798 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.925814 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.925834 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.925848 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.930353 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.005571 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 10:06:20.233793737 +0000 UTC Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.029334 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.029375 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.029385 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.029402 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.029412 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.035150 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:31 crc kubenswrapper[4942]: E0218 19:18:31.035318 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.056207 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.074935 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.098734 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"2026-02-18T19:17:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29\\\\n2026-02-18T19:17:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29 to /host/opt/cni/bin/\\\\n2026-02-18T19:17:44Z [verbose] multus-daemon started\\\\n2026-02-18T19:17:44Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:18:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.122108 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.132178 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.132236 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.132249 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.132269 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.132287 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.141130 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.156725 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab011ca-f26a-4a5e-b093-b1f4dc0e5efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.180550 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.195717 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.219036 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.233470 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.235808 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.235858 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.235873 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.235896 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.235912 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.243614 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.258109 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.273444 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.287124 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.300004 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc 
kubenswrapper[4942]: I0218 19:18:31.316492 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.327693 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.338689 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.338728 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.338739 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.338754 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.338807 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.340955 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.442902 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.442954 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.442964 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.442982 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.442993 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.545836 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.545882 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.545891 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.545909 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.545917 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.649478 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.649528 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.649541 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.649561 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.649572 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.752048 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.752099 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.752111 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.752130 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.752142 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.855453 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.855520 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.855539 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.855567 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.855587 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.959954 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.960043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.960074 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.960097 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.960124 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.006878 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 16:20:59.567592936 +0000 UTC Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.035339 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.035376 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.035393 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:32 crc kubenswrapper[4942]: E0218 19:18:32.035659 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:32 crc kubenswrapper[4942]: E0218 19:18:32.035807 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:32 crc kubenswrapper[4942]: E0218 19:18:32.035947 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.062834 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.062878 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.062896 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.062917 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.062931 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.166055 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.166116 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.166127 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.166147 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.166161 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.268733 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.268798 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.268810 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.268826 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.268838 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.371916 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.371969 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.371980 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.371999 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.372012 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.474953 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.475119 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.475130 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.475146 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.475156 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.578388 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.578421 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.578429 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.578444 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.578454 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.680533 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.680589 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.680601 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.680622 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.680638 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.783145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.783220 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.783231 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.783252 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.783268 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.885756 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.885864 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.885885 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.885917 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.885940 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.988915 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.989029 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.989049 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.989119 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.989138 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.007175 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:32:09.515489795 +0000 UTC Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.035902 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:33 crc kubenswrapper[4942]: E0218 19:18:33.036189 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.091250 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.091294 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.091307 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.091325 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.091338 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.194130 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.194201 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.194212 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.194232 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.194248 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.297228 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.297284 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.297298 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.297317 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.297328 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.400901 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.400965 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.400978 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.401004 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.401020 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.504015 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.504059 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.504070 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.504093 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.504105 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.606865 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.606928 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.606946 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.606972 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.606991 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.709427 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.709493 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.709515 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.709546 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.709568 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.812459 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.812520 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.812541 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.812572 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.812595 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.915122 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.915302 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.915329 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.915366 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.915386 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.007369 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 17:27:48.649414084 +0000 UTC Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.018558 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.018626 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.018655 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.018687 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.018704 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.035007 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.035156 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.035243 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:34 crc kubenswrapper[4942]: E0218 19:18:34.035371 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:34 crc kubenswrapper[4942]: E0218 19:18:34.035278 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:34 crc kubenswrapper[4942]: E0218 19:18:34.035577 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.122064 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.122117 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.122133 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.122157 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.122175 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.224422 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.224472 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.224481 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.224498 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.224508 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.327856 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.327920 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.327940 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.327968 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.327988 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.431046 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.431110 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.431133 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.431191 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.431209 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.534440 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.534568 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.534590 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.534625 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.534647 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.638459 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.638521 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.638535 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.638562 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.638577 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.742497 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.742561 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.742579 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.742603 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.742619 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.845991 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.846046 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.846058 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.846079 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.846093 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.948601 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.948661 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.948680 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.948705 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.948725 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.008547 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:25:00.679658309 +0000 UTC Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.035429 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:35 crc kubenswrapper[4942]: E0218 19:18:35.035987 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.036513 4942 scope.go:117] "RemoveContainer" containerID="5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.050706 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.050786 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.050805 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.050829 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.050848 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.153415 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.153484 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.153503 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.153534 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.153555 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.256875 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.256914 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.256925 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.256945 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.256960 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.360752 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.360848 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.360865 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.360893 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.360912 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.465301 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.465343 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.465352 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.465368 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.465379 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.567908 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.567940 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.567949 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.567967 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.567977 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.622290 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/2.log" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.624804 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.625385 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.641192 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.653648 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.670948 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.671012 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.671031 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.671057 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.671076 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.673672 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.686289 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.701672 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab011ca-f26a-4a5e-b093-b1f4dc0e5efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.718182 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.734476 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.770406 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.774134 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.774170 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.774182 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.774199 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.774210 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.783728 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.798906 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"2026-02-18T19:17:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29\\\\n2026-02-18T19:17:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29 to /host/opt/cni/bin/\\\\n2026-02-18T19:17:44Z [verbose] multus-daemon started\\\\n2026-02-18T19:17:44Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:18:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.829412 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 
19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.843454 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811c
a11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.853662 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.863698 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.874945 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.876369 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.876399 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.876410 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.876430 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.876442 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.892052 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.906415 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.916658 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc 
kubenswrapper[4942]: I0218 19:18:35.979915 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.981598 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.981967 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.982589 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.982627 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.009680 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 10:37:13.517830815 +0000 UTC Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.035707 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.035856 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:36 crc kubenswrapper[4942]: E0218 19:18:36.036082 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.036112 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:36 crc kubenswrapper[4942]: E0218 19:18:36.037503 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:36 crc kubenswrapper[4942]: E0218 19:18:36.038479 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.087303 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.087383 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.087398 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.087421 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.087435 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.190738 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.190823 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.190840 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.190866 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.190883 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.293372 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.293406 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.293413 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.293430 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.293439 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.396191 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.396262 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.396279 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.396302 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.396323 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.499281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.499354 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.499382 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.499408 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.499421 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.601832 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.601873 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.601884 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.601901 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.601915 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.631199 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/3.log" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.631844 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/2.log" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.634986 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" exitCode=1 Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.635033 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.635078 4942 scope.go:117] "RemoveContainer" containerID="5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.635708 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:18:36 crc kubenswrapper[4942]: E0218 19:18:36.635893 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.662696 4942 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d600
6e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.683176 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.700534 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.705931 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.705983 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.705993 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.706012 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.706027 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.722412 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.743060 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.763581 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.778452 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc 
kubenswrapper[4942]: I0218 19:18:36.794681 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.809499 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.809561 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.809578 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.809602 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.809854 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.810274 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.828111 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3
79b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203
bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-
cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.847300 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.864625 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.884417 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"2026-02-18T19:17:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29\\\\n2026-02-18T19:17:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29 to /host/opt/cni/bin/\\\\n2026-02-18T19:17:44Z [verbose] multus-daemon started\\\\n2026-02-18T19:17:44Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:18:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.909114 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:36Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:18:36.041536 7014 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 
requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:18:36.041160 7014 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.041605 7014 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.041616 7014 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0218 19:18:36.041625 7014 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0218 19:18:36.041636 7014 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.040597 7014 model_client.go:398] Mutate operations generated as: [{Op:mutate 
Table:Logi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee1
5543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.913113 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.913193 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.913219 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.913253 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.913276 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.925522 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.940743 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab011ca-f26a-4a5e-b093-b1f4dc0e5efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.958457 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.972070 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.010491 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:02:46.102018936 +0000 UTC Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.015879 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.015948 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.015967 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.015996 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.016018 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.035245 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:37 crc kubenswrapper[4942]: E0218 19:18:37.035404 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.119047 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.119122 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.119140 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.119169 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.119188 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.222341 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.222405 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.222425 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.222450 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.222469 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.326633 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.326719 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.326739 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.326808 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.326827 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.430334 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.430405 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.430427 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.430456 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.430481 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.534141 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.534210 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.534231 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.534258 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.534278 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.638320 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.638441 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.638458 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.638486 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.638508 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.643858 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/3.log" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.650614 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:18:37 crc kubenswrapper[4942]: E0218 19:18:37.650818 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.665840 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.686798 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T1
9:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.702959 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.724327 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.739646 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc 
kubenswrapper[4942]: I0218 19:18:37.743473 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.743557 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.743579 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.743613 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.743632 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.762609 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.784381 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.801937 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.826984 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.847300 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.847374 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.847387 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.847410 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.847423 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.850340 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.871709 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.891939 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.914820 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.934916 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.950438 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.950500 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.950518 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc 
kubenswrapper[4942]: I0218 19:18:37.950542 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.950560 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.957158 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"2026-02-18T19:17:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29\\\\n2026-02-18T19:17:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29 to /host/opt/cni/bin/\\\\n2026-02-18T19:17:44Z [verbose] multus-daemon started\\\\n2026-02-18T19:17:44Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:18:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.989109 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:36Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:18:36.041536 7014 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 
10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:18:36.041160 7014 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.041605 7014 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.041616 7014 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0218 19:18:36.041625 7014 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0218 19:18:36.041636 7014 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.040597 7014 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.008842 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.010911 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:55:36.984697244 +0000 UTC Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.026911 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab011ca-f26a-4a5e-b093-b1f4dc0e5efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.035081 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.035216 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.035273 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.035083 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.035440 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.035687 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.053499 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.053576 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.053605 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.053637 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.053662 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.157076 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.157171 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.157198 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.157232 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.157260 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.261113 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.261172 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.261218 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.261239 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.261254 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.354969 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.355030 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.355047 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.355074 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.355092 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.377845 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.383876 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.383963 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.383988 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.384022 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.384047 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.403562 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.408047 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.408134 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.408153 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.408179 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.408199 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.428623 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.433900 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.433975 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.434000 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.434036 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.434061 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.459472 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.465554 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.465613 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.465637 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.465669 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.465693 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.483189 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.483409 4942 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.486362 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.486477 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.486559 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.486595 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.486700 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.589823 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.589890 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.589908 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.589941 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.589969 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.693498 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.693609 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.693628 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.693697 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.693718 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.797419 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.797499 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.797539 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.797572 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.797595 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.901384 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.901450 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.901471 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.901496 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.901515 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.004838 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.004906 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.004922 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.004950 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.004968 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.012103 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 20:23:31.764992536 +0000 UTC Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.035904 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:39 crc kubenswrapper[4942]: E0218 19:18:39.036096 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.108588 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.108687 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.108712 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.108747 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.108810 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.212507 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.212597 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.212617 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.212646 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.212665 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.316431 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.316522 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.316542 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.316569 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.316589 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.419827 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.419892 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.419912 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.419940 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.419961 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.522996 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.523067 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.523088 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.523116 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.523137 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.626232 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.626304 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.626342 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.626377 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.626402 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.728985 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.729022 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.729032 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.729048 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.729094 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.832480 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.832536 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.832550 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.832595 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.832665 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.935377 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.935429 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.935447 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.935474 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.935492 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.012803 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:34:37.902645373 +0000 UTC Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.035907 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.036023 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:40 crc kubenswrapper[4942]: E0218 19:18:40.036114 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:40 crc kubenswrapper[4942]: E0218 19:18:40.036238 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.036271 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:40 crc kubenswrapper[4942]: E0218 19:18:40.036402 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.038272 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.038342 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.038365 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.038391 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.038411 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.142177 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.142314 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.142346 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.142381 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.142401 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.246155 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.246231 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.246253 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.246283 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.246301 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.349429 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.349496 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.349513 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.349537 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.349555 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.452978 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.453043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.453057 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.453077 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.453092 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.556612 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.556678 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.556705 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.556737 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.556794 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.659554 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.659617 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.659636 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.659665 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.659684 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.766319 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.766417 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.766440 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.766466 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.766485 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.869455 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.869559 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.869580 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.869609 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.869628 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.973716 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.973968 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.974020 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.974052 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.974071 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.013545 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:23:03.663222231 +0000 UTC Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.035713 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:41 crc kubenswrapper[4942]: E0218 19:18:41.035983 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.056296 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.073880 4942 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a721ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.077553 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.077627 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 
19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.077646 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.077670 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.077718 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.090638 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.114979 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.128494 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.139132 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.155096 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.168379 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab011ca-f26a-4a5e-b093-b1f4dc0e5efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.180745 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.180833 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.180852 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.180884 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.180901 4942 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.183423 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.199856 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888
cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.217911 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.230334 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.251487 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"2026-02-18T19:17:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29\\\\n2026-02-18T19:17:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29 to /host/opt/cni/bin/\\\\n2026-02-18T19:17:44Z [verbose] multus-daemon started\\\\n2026-02-18T19:17:44Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:18:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.286884 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:36Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:18:36.041536 7014 model_client.go:382] Update 
operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:18:36.041160 7014 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.041605 7014 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.041616 7014 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0218 19:18:36.041625 7014 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0218 19:18:36.041636 7014 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.040597 7014 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.289397 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.289449 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.289492 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.289528 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.289547 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.307267 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.320726 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.332493 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.346787 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T1
9:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.392706 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.392744 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.392775 4942 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.392796 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.392810 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.495140 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.495216 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.495240 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.495275 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.495300 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.598013 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.598050 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.598059 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.598075 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.598084 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.700752 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.700856 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.700880 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.700910 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.700933 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.804594 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.804664 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.804811 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.804902 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.804932 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.907662 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.907836 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.907861 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.907889 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.907908 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.011131 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.011207 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.011226 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.011255 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.011275 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.014275 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:45:47.101671997 +0000 UTC Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.035297 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.035346 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.035359 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:42 crc kubenswrapper[4942]: E0218 19:18:42.035480 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:42 crc kubenswrapper[4942]: E0218 19:18:42.035691 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:42 crc kubenswrapper[4942]: E0218 19:18:42.035846 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.114470 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.114551 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.114821 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.114918 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.114944 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.218802 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.218886 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.218909 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.218935 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.218956 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.322092 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.322172 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.322195 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.322226 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.322247 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.425806 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.425865 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.425883 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.425914 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.425932 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.528610 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.528670 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.528681 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.528697 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.528709 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.631377 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.631432 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.631444 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.631463 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.631475 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.734535 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.734606 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.734630 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.734662 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.734683 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.837610 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.837666 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.837680 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.837702 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.837714 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.940553 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.940620 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.940639 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.940665 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.940683 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.014499 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:55:10.006794158 +0000 UTC Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.034956 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:43 crc kubenswrapper[4942]: E0218 19:18:43.035537 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.043085 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.043151 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.043179 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.043209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.043227 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.146523 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.146601 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.146619 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.146651 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.146670 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.249575 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.249681 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.249718 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.249839 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.249862 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.353606 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.353673 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.353693 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.353721 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.353741 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.456825 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.456913 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.456931 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.456962 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.456980 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.561342 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.561407 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.561428 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.561455 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.561478 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.664716 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.664808 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.664827 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.664852 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.664867 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.768516 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.768577 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.768597 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.768623 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.768641 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.871632 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.871721 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.871745 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.871813 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.871847 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.975818 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.975882 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.975898 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.975923 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.975941 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.014993 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 11:20:37.033363042 +0000 UTC Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.035469 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.035512 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.035607 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:44 crc kubenswrapper[4942]: E0218 19:18:44.035740 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:44 crc kubenswrapper[4942]: E0218 19:18:44.035937 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:44 crc kubenswrapper[4942]: E0218 19:18:44.036086 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.079283 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.079334 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.079351 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.079377 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.079397 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.182739 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.182826 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.182838 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.182857 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.182873 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.288253 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.288310 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.288319 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.288335 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.288359 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.391270 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.391342 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.391359 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.391384 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.391402 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.495090 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.495151 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.495168 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.495192 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.495219 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.598335 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.598405 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.598423 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.598451 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.598468 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.701498 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.701577 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.701600 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.701640 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.701663 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.805215 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.805286 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.805302 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.805334 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.805354 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.910610 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.910676 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.910698 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.910739 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.910785 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.015130 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.015188 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.015212 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.015182 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:34:16.992575745 +0000 UTC Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.015240 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.015278 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.036024 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.036194 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.118689 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.119178 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.119331 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.119649 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.119833 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.223237 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.223299 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.223317 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.223346 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.223365 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.326585 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.326660 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.326678 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.326703 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.326723 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.430261 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.430339 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.430366 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.430400 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.430422 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.533281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.533331 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.533347 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.533371 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.533426 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.636982 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.637064 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.637090 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.637122 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.637151 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.740311 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.740380 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.740404 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.740433 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.740456 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.843615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.843686 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.843713 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.843745 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.843798 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.871413 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.871478 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.871687 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.871726 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.871747 4942 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.871870 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.871846524 +0000 UTC m=+149.576779219 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.871969 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.871991 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.872005 4942 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.872049 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.872035629 +0000 UTC m=+149.576968334 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.947077 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.947136 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.947153 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.947177 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.947197 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.015463 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:17:59.529701696 +0000 UTC Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.035220 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.035292 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.035250 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:18:46 crc kubenswrapper[4942]: E0218 19:18:46.035466 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:18:46 crc kubenswrapper[4942]: E0218 19:18:46.035599 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:18:46 crc kubenswrapper[4942]: E0218 19:18:46.035800 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.050084 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.050139 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.050161 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.050190 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.050215 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.074067 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:18:46 crc kubenswrapper[4942]: E0218 19:18:46.074266 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.074226317 +0000 UTC m=+149.779159032 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.074437 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.074491 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:18:46 crc kubenswrapper[4942]: E0218 19:18:46.074685 4942 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 18 19:18:46 crc kubenswrapper[4942]: E0218 19:18:46.074710 4942 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 18 19:18:46 crc kubenswrapper[4942]: E0218 19:18:46.074834 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.074804302 +0000 UTC m=+149.779736997 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 18 19:18:46 crc kubenswrapper[4942]: E0218 19:18:46.074875 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.074855953 +0000 UTC m=+149.779788778 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.153575 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.153646 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.153663 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.153690 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.153710 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.257065 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.257207 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.257232 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.257266 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.257289 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.361392 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.361452 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.361463 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.361486 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.361500 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.464749 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.464863 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.464882 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.464906 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.464926 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.568555 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.568659 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.568676 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.568720 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.568805 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.674230 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.674284 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.674301 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.674324 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.674339 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.777639 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.777713 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.777739 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.777807 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.777837 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.881019 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.881070 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.881081 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.881099 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.881112 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.984748 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.984910 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.985253 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.985307 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.985330 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.016147 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:12:13.511647487 +0000 UTC
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.035240 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:18:47 crc kubenswrapper[4942]: E0218 19:18:47.035435 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.088931 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.088996 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.089015 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.089043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.089064 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.193487 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.193589 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.193614 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.193646 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.193670 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.297646 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.297749 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.297793 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.297820 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.297840 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.401890 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.401966 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.401985 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.402011 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.402029 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.505658 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.505839 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.505873 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.505952 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.506045 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.609956 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.610050 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.610075 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.610103 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.610135 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.714084 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.714180 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.714202 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.714230 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.714251 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.817232 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.817359 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.817383 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.817409 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.817426 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.921229 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.921301 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.921320 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.921345 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.921363 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.016514 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:56:05.503592719 +0000 UTC
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.024532 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.024601 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.024622 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.024649 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.024668 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.035201 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.035234 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.035245 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q"
Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.035377 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.035519 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.035618 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.128685 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.128830 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.128867 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.128899 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.128920 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.231694 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.231754 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.231779 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.231796 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.231808 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.335061 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.335138 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.335156 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.335179 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.335198 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.439091 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.439156 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.439177 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.439222 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.439252 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.544316 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.544406 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.544432 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.544467 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.544496 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.647591 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.647633 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.647645 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.647663 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.647676 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.750995 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.751067 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.751084 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.751111 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.751130 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.780988 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.781075 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.781098 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.781127 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.781148 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.802964 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.808122 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.808235 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.808259 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.808293 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.808312 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.827365 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.831908 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.831977 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.832002 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.832036 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.832060 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.853354 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.859016 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.859077 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.859105 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.859134 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.859159 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.878417 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.883784 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.883827 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.883840 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.883859 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.883872 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.904625 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.904941 4942 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.906938 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.907000 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.907018 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.907043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.907062 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.010553 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.010623 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.010642 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.010672 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.010691 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.016963 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:03:54.936474102 +0000 UTC Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.035725 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:49 crc kubenswrapper[4942]: E0218 19:18:49.035943 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.113633 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.113684 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.113699 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.113726 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.113745 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.217268 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.217346 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.217371 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.217401 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.217421 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.320028 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.320088 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.320112 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.320145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.320169 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.423051 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.423121 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.423138 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.423164 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.423183 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.525534 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.525609 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.525634 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.525659 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.525678 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.629147 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.629252 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.629273 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.629340 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.629361 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.733167 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.733249 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.733268 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.733297 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.733316 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.837078 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.837158 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.837177 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.837208 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.837228 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.941543 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.941622 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.941640 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.941666 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.941683 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.017971 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 21:15:33.134557562 +0000 UTC Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.035832 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.035939 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.035976 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:50 crc kubenswrapper[4942]: E0218 19:18:50.036042 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:50 crc kubenswrapper[4942]: E0218 19:18:50.036181 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:50 crc kubenswrapper[4942]: E0218 19:18:50.036907 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.045187 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.045245 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.045264 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.045290 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.045311 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.148664 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.148727 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.148747 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.148816 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.148844 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.252199 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.252268 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.252286 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.252311 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.252330 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.355311 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.355372 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.355391 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.355416 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.355435 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.458930 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.459021 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.459048 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.459077 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.459094 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.561839 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.561911 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.561936 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.561969 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.561995 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.665969 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.666057 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.666079 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.666107 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.666125 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.769591 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.769648 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.769665 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.769688 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.769699 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.872980 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.873053 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.873074 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.873100 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.873118 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.976722 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.976814 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.976835 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.976861 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.976878 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.018164 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:51:19.397192165 +0000 UTC Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.035030 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:51 crc kubenswrapper[4942]: E0218 19:18:51.035238 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.056517 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.077805 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.080544 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.080619 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.080646 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.080673 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.080693 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.099677 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.115705 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.138122 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"2026-02-18T19:17:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29\\\\n2026-02-18T19:17:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29 to /host/opt/cni/bin/\\\\n2026-02-18T19:17:44Z [verbose] multus-daemon started\\\\n2026-02-18T19:17:44Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:18:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.161584 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:36Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:18:36.041536 7014 model_client.go:382] Update 
operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:18:36.041160 7014 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.041605 7014 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.041616 7014 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0218 19:18:36.041625 7014 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0218 19:18:36.041636 7014 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.040597 7014 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.179579 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.183455 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.183518 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.183533 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.183555 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.183569 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.194170 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab011ca-f26a-4a5e-b093-b1f4dc0e5efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.206397 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.220994 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T1
9:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.233726 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.250546 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.266207 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc 
kubenswrapper[4942]: I0218 19:18:51.285372 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.287164 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.287250 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.287272 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.287298 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.287317 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.305160 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.321358 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.354218 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.374277 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.391132 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.391208 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.391229 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 
19:18:51.391259 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.391279 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.494944 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.495020 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.495038 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.495063 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.495083 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.598390 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.598438 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.598451 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.598469 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.598479 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.701682 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.701736 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.701747 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.701791 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.701804 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.804941 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.805023 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.805033 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.805051 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.805061 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.908787 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.908878 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.908905 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.908940 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.908968 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.012063 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.012142 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.012161 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.012188 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.012207 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.019312 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:07:06.214505962 +0000 UTC Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.034841 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:52 crc kubenswrapper[4942]: E0218 19:18:52.035041 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.035322 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.036016 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:52 crc kubenswrapper[4942]: E0218 19:18:52.036513 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.036658 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:18:52 crc kubenswrapper[4942]: E0218 19:18:52.036842 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" Feb 18 19:18:52 crc kubenswrapper[4942]: E0218 19:18:52.036911 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.115009 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.115082 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.115118 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.115145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.115165 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.218586 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.218641 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.218654 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.218676 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.218691 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.323030 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.323102 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.323125 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.323153 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.323171 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.425913 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.425997 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.426017 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.426050 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.426070 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.529343 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.529409 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.529421 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.529439 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.529456 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.633359 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.633463 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.633483 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.633513 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.633531 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.735730 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.735829 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.735843 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.735866 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.735882 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.838739 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.838850 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.838868 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.838893 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.838910 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.942312 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.942414 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.942440 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.942483 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.942508 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.020116 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:32:33.328447005 +0000 UTC Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.035490 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:53 crc kubenswrapper[4942]: E0218 19:18:53.035797 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.045787 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.045870 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.045889 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.045907 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.045919 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.149035 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.149464 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.149487 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.149519 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.149539 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.251711 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.251779 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.251789 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.251803 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.251829 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.355263 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.355418 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.355509 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.355594 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.355706 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.460119 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.460194 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.460229 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.460261 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.460282 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.563929 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.563997 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.564022 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.564054 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.564077 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.667530 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.667609 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.667631 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.667973 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.667997 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.771387 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.771422 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.771431 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.771445 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.771455 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.875555 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.875632 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.875657 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.875689 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.875711 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.978372 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.978433 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.978450 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.978474 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.978491 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.021169 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:15:13.277779715 +0000 UTC Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.035641 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.035645 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.035693 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:54 crc kubenswrapper[4942]: E0218 19:18:54.036212 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:54 crc kubenswrapper[4942]: E0218 19:18:54.036366 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:54 crc kubenswrapper[4942]: E0218 19:18:54.036533 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.061478 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.081495 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.081576 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.081608 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.081637 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.081659 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.185034 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.185105 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.185147 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.185182 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.185209 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.287326 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.287374 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.287385 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.287399 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.287408 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.391014 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.391092 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.391109 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.391137 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.391156 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.494295 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.494343 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.494351 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.494368 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.494378 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.597412 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.597474 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.597490 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.597515 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.597534 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.700651 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.700697 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.700706 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.700722 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.700734 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.803319 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.803355 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.803365 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.803377 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.803386 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.907254 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.907325 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.907343 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.907370 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.907388 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.010210 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.010274 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.010291 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.010315 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.010361 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.021755 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 15:41:27.826339754 +0000 UTC Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.035297 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:55 crc kubenswrapper[4942]: E0218 19:18:55.035495 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.113976 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.114050 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.114079 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.114113 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.114138 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.217273 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.217434 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.217458 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.217488 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.217584 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.321312 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.321382 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.321398 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.321424 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.321443 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.425540 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.425613 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.425633 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.425660 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.425679 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.528974 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.529059 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.529100 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.529130 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.529150 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.632190 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.632256 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.632274 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.632298 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.632318 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.736372 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.736449 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.736470 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.736495 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.736520 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.839182 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.839253 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.839275 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.839309 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.839332 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.942475 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.942556 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.942586 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.942619 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.942643 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.022436 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:12:14.29956998 +0000 UTC Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.035128 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.035166 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.035137 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:56 crc kubenswrapper[4942]: E0218 19:18:56.035285 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:56 crc kubenswrapper[4942]: E0218 19:18:56.035427 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:56 crc kubenswrapper[4942]: E0218 19:18:56.035516 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.046580 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.046634 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.046651 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.046753 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.046822 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.149821 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.149888 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.149902 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.149922 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.149935 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.253413 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.253500 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.253518 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.253543 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.253561 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.356407 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.356508 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.356526 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.356553 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.356574 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.460161 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.460269 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.460339 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.460384 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.460405 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.564143 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.564209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.564229 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.564257 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.564282 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.666830 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.666889 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.666914 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.666947 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.666974 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.770538 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.770614 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.770632 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.770658 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.770676 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.873929 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.874007 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.874025 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.874055 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.874077 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.977456 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.977533 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.977551 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.977579 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.977598 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.023077 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:28:44.002423172 +0000 UTC Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.035660 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:57 crc kubenswrapper[4942]: E0218 19:18:57.036074 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.079894 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.079945 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.079955 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.080009 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.080026 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.183092 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.183163 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.183186 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.183212 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.183227 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.286047 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.286108 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.286119 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.286143 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.286156 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.388957 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.389027 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.389045 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.389072 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.389093 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.492852 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.492934 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.492954 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.492982 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.493003 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.596132 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.596213 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.596251 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.596276 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.596292 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.699361 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.699415 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.699424 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.699441 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.699452 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.802639 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.802700 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.802716 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.802740 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.802784 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.905901 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.906045 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.906067 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.906092 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.906112 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.009659 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.009861 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.009880 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.009908 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.009929 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.023255 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:50:03.136425909 +0000 UTC Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.034954 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.034984 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.034984 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:58 crc kubenswrapper[4942]: E0218 19:18:58.035228 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:58 crc kubenswrapper[4942]: E0218 19:18:58.035388 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:58 crc kubenswrapper[4942]: E0218 19:18:58.035524 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.113100 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.113195 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.113220 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.113258 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.113284 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.216283 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.216420 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.216437 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.216557 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.216636 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.319857 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.319958 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.319981 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.320013 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.320033 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.423945 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.424037 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.424062 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.424094 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.424114 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.526999 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.527056 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.527068 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.527085 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.527095 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.630847 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.630913 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.630931 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.630986 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.631011 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.733381 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.733451 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.733468 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.733492 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.733509 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.836101 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.836174 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.836194 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.836220 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.836240 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.939184 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.939263 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.939282 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.939310 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.939335 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.023954 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 22:13:26.227928965 +0000 UTC Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.035215 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:59 crc kubenswrapper[4942]: E0218 19:18:59.035407 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.043207 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.043274 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.043290 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.043313 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.043330 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:59Z","lastTransitionTime":"2026-02-18T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.146066 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.146118 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.146135 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.146158 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.146176 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:59Z","lastTransitionTime":"2026-02-18T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.249320 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.249389 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.249412 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.249442 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.249465 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:59Z","lastTransitionTime":"2026-02-18T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.254842 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.254897 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.254915 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.254936 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.254952 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:59Z","lastTransitionTime":"2026-02-18T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.330865 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"]
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.331706 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.336917 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.337110 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.337636 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.337955 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.415449 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podStartSLOduration=78.415413135 podStartE2EDuration="1m18.415413135s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.397808026 +0000 UTC m=+99.102740711" watchObservedRunningTime="2026-02-18 19:18:59.415413135 +0000 UTC m=+99.120345840"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.415892 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-8jfwb" podStartSLOduration=78.415880057 podStartE2EDuration="1m18.415880057s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.415301742 +0000 UTC m=+99.120234407" watchObservedRunningTime="2026-02-18 19:18:59.415880057 +0000 UTC m=+99.120812762"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.439715 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a3683112-fff6-4df6-ae06-4a3c78a76e5b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.439825 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3683112-fff6-4df6-ae06-4a3c78a76e5b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.439873 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a3683112-fff6-4df6-ae06-4a3c78a76e5b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.439935 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3683112-fff6-4df6-ae06-4a3c78a76e5b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.440255 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3683112-fff6-4df6-ae06-4a3c78a76e5b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.466368 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=42.466327804 podStartE2EDuration="42.466327804s" podCreationTimestamp="2026-02-18 19:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.465897783 +0000 UTC m=+99.170830518" watchObservedRunningTime="2026-02-18 19:18:59.466327804 +0000 UTC m=+99.171260559"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.482935 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=31.482897947 podStartE2EDuration="31.482897947s" podCreationTimestamp="2026-02-18 19:18:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.482190008 +0000 UTC m=+99.187122713" watchObservedRunningTime="2026-02-18 19:18:59.482897947 +0000 UTC m=+99.187830652"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.541684 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a3683112-fff6-4df6-ae06-4a3c78a76e5b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.541908 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3683112-fff6-4df6-ae06-4a3c78a76e5b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.541919 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a3683112-fff6-4df6-ae06-4a3c78a76e5b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.542081 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3683112-fff6-4df6-ae06-4a3c78a76e5b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.542154 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3683112-fff6-4df6-ae06-4a3c78a76e5b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.542205 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a3683112-fff6-4df6-ae06-4a3c78a76e5b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.542367 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a3683112-fff6-4df6-ae06-4a3c78a76e5b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.542397 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=77.542380559 podStartE2EDuration="1m17.542380559s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.542185354 +0000 UTC m=+99.247118029" watchObservedRunningTime="2026-02-18 19:18:59.542380559 +0000 UTC m=+99.247313234"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.543623 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3683112-fff6-4df6-ae06-4a3c78a76e5b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.556085 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3683112-fff6-4df6-ae06-4a3c78a76e5b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.571649 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3683112-fff6-4df6-ae06-4a3c78a76e5b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.589215 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-wxck8" podStartSLOduration=78.589189441 podStartE2EDuration="1m18.589189441s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.576089369 +0000 UTC m=+99.281022064" watchObservedRunningTime="2026-02-18 19:18:59.589189441 +0000 UTC m=+99.294122106"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.619744 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=78.619718068 podStartE2EDuration="1m18.619718068s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.604236484 +0000 UTC m=+99.309169149" watchObservedRunningTime="2026-02-18 19:18:59.619718068 +0000 UTC m=+99.324650733"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.649024 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" podStartSLOduration=77.648998451 podStartE2EDuration="1m17.648998451s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.647502442 +0000 UTC m=+99.352435117" watchObservedRunningTime="2026-02-18 19:18:59.648998451 +0000 UTC m=+99.353931126"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.659113 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.737565 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" event={"ID":"a3683112-fff6-4df6-ae06-4a3c78a76e5b","Type":"ContainerStarted","Data":"fde2b99c4c2f44221db1343ac6ac41994c84d5f4f2c86b6a5bc822771e5444c4"}
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.764373 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" podStartSLOduration=78.764353222 podStartE2EDuration="1m18.764353222s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.67692547 +0000 UTC m=+99.381858165" watchObservedRunningTime="2026-02-18 19:18:59.764353222 +0000 UTC m=+99.469285887"
Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.764654 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=5.764651319 podStartE2EDuration="5.764651319s" podCreationTimestamp="2026-02-18 19:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.760004268 +0000 UTC m=+99.464936943" watchObservedRunningTime="2026-02-18 19:18:59.764651319 +0000 UTC m=+99.469583974"
Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.024625 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:48:36.087469496 +0000 UTC
Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.024740 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.034819 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.034866 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:19:00 crc kubenswrapper[4942]: E0218 19:19:00.034995 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.035050 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q"
Feb 18 19:19:00 crc kubenswrapper[4942]: E0218 19:19:00.035205 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53"
Feb 18 19:19:00 crc kubenswrapper[4942]: E0218 19:19:00.035549 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.037258 4942 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.254441 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q"
Feb 18 19:19:00 crc kubenswrapper[4942]: E0218 19:19:00.254705 4942 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 18 19:19:00 crc kubenswrapper[4942]: E0218 19:19:00.255065 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs podName:ac5b5f40-34db-4aeb-abb4-57204673bd53 nodeName:}" failed. No retries permitted until 2026-02-18 19:20:04.255042899 +0000 UTC m=+163.959975564 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs") pod "network-metrics-daemon-qwg6q" (UID: "ac5b5f40-34db-4aeb-abb4-57204673bd53") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.743383 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" event={"ID":"a3683112-fff6-4df6-ae06-4a3c78a76e5b","Type":"ContainerStarted","Data":"acb325a963abc0e7eec844a8cb08d5a92f1d633a5a4fbae8dd5db4a3c4328286"}
Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.769647 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5pgvt" podStartSLOduration=80.76962313 podStartE2EDuration="1m20.76962313s" podCreationTimestamp="2026-02-18 19:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.791693485 +0000 UTC m=+99.496626160" watchObservedRunningTime="2026-02-18 19:19:00.76962313 +0000 UTC m=+100.474555795"
Feb 18 19:19:01 crc kubenswrapper[4942]: I0218 19:19:01.035245 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:19:01 crc kubenswrapper[4942]: E0218 19:19:01.036375 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:19:02 crc kubenswrapper[4942]: I0218 19:19:02.035794 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q"
Feb 18 19:19:02 crc kubenswrapper[4942]: E0218 19:19:02.036735 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53"
Feb 18 19:19:02 crc kubenswrapper[4942]: I0218 19:19:02.035937 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:19:02 crc kubenswrapper[4942]: I0218 19:19:02.035810 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:19:02 crc kubenswrapper[4942]: E0218 19:19:02.037139 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:19:02 crc kubenswrapper[4942]: E0218 19:19:02.037401 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:19:03 crc kubenswrapper[4942]: I0218 19:19:03.035903 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:19:03 crc kubenswrapper[4942]: E0218 19:19:03.036088 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:19:04 crc kubenswrapper[4942]: I0218 19:19:04.035088 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:19:04 crc kubenswrapper[4942]: I0218 19:19:04.035171 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:19:04 crc kubenswrapper[4942]: E0218 19:19:04.035259 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:19:04 crc kubenswrapper[4942]: E0218 19:19:04.035383 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:19:04 crc kubenswrapper[4942]: I0218 19:19:04.035971 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q"
Feb 18 19:19:04 crc kubenswrapper[4942]: E0218 19:19:04.036896 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53"
Feb 18 19:19:05 crc kubenswrapper[4942]: I0218 19:19:05.035816 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:19:05 crc kubenswrapper[4942]: E0218 19:19:05.036006 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:19:06 crc kubenswrapper[4942]: I0218 19:19:06.035583 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q"
Feb 18 19:19:06 crc kubenswrapper[4942]: I0218 19:19:06.036319 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"
Feb 18 19:19:06 crc kubenswrapper[4942]: I0218 19:19:06.036041 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:19:06 crc kubenswrapper[4942]: E0218 19:19:06.036526 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417"
Feb 18 19:19:06 crc kubenswrapper[4942]: E0218 19:19:06.036521 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:19:06 crc kubenswrapper[4942]: I0218 19:19:06.036044 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:19:06 crc kubenswrapper[4942]: E0218 19:19:06.036822 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:19:06 crc kubenswrapper[4942]: E0218 19:19:06.037548 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53"
Feb 18 19:19:07 crc kubenswrapper[4942]: I0218 19:19:07.035284 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:19:07 crc kubenswrapper[4942]: E0218 19:19:07.035466 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:19:08 crc kubenswrapper[4942]: I0218 19:19:08.035525 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:19:08 crc kubenswrapper[4942]: I0218 19:19:08.035597 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:19:08 crc kubenswrapper[4942]: I0218 19:19:08.035637 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q"
Feb 18 19:19:08 crc kubenswrapper[4942]: E0218 19:19:08.035861 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:19:08 crc kubenswrapper[4942]: E0218 19:19:08.036151 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53"
Feb 18 19:19:08 crc kubenswrapper[4942]: E0218 19:19:08.036398 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:19:09 crc kubenswrapper[4942]: I0218 19:19:09.035639 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:19:09 crc kubenswrapper[4942]: E0218 19:19:09.035947 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:19:10 crc kubenswrapper[4942]: I0218 19:19:10.035520 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:19:10 crc kubenswrapper[4942]: I0218 19:19:10.035597 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q"
Feb 18 19:19:10 crc kubenswrapper[4942]: I0218 19:19:10.035543 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:19:10 crc kubenswrapper[4942]: E0218 19:19:10.035896 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:19:10 crc kubenswrapper[4942]: E0218 19:19:10.036117 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:19:10 crc kubenswrapper[4942]: E0218 19:19:10.036349 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53"
Feb 18 19:19:11 crc kubenswrapper[4942]: I0218 19:19:11.035053 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:19:11 crc kubenswrapper[4942]: E0218 19:19:11.036191 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:19:12 crc kubenswrapper[4942]: I0218 19:19:12.035404 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:19:12 crc kubenswrapper[4942]: I0218 19:19:12.035430 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:19:12 crc kubenswrapper[4942]: E0218 19:19:12.036170 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:19:12 crc kubenswrapper[4942]: I0218 19:19:12.035432 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q"
Feb 18 19:19:12 crc kubenswrapper[4942]: E0218 19:19:12.036289 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:19:12 crc kubenswrapper[4942]: E0218 19:19:12.036486 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:13 crc kubenswrapper[4942]: I0218 19:19:13.035931 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:13 crc kubenswrapper[4942]: E0218 19:19:13.036180 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:14 crc kubenswrapper[4942]: I0218 19:19:14.035910 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:14 crc kubenswrapper[4942]: E0218 19:19:14.036102 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:14 crc kubenswrapper[4942]: I0218 19:19:14.036372 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:14 crc kubenswrapper[4942]: I0218 19:19:14.036418 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:14 crc kubenswrapper[4942]: E0218 19:19:14.036509 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:14 crc kubenswrapper[4942]: E0218 19:19:14.036729 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:15 crc kubenswrapper[4942]: I0218 19:19:15.035115 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:15 crc kubenswrapper[4942]: E0218 19:19:15.035486 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:15 crc kubenswrapper[4942]: I0218 19:19:15.802841 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/1.log" Feb 18 19:19:15 crc kubenswrapper[4942]: I0218 19:19:15.803555 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/0.log" Feb 18 19:19:15 crc kubenswrapper[4942]: I0218 19:19:15.803606 4942 generic.go:334] "Generic (PLEG): container finished" podID="75150b8c-7a02-497b-86c3-eabc9c8dbc55" containerID="4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46" exitCode=1 Feb 18 19:19:15 crc kubenswrapper[4942]: I0218 19:19:15.803655 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jfwb" event={"ID":"75150b8c-7a02-497b-86c3-eabc9c8dbc55","Type":"ContainerDied","Data":"4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46"} Feb 18 19:19:15 crc kubenswrapper[4942]: I0218 19:19:15.803705 4942 scope.go:117] "RemoveContainer" containerID="f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4" Feb 18 19:19:15 crc kubenswrapper[4942]: I0218 19:19:15.804368 4942 scope.go:117] "RemoveContainer" containerID="4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46" Feb 18 19:19:15 crc kubenswrapper[4942]: E0218 19:19:15.804952 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-8jfwb_openshift-multus(75150b8c-7a02-497b-86c3-eabc9c8dbc55)\"" pod="openshift-multus/multus-8jfwb" podUID="75150b8c-7a02-497b-86c3-eabc9c8dbc55" Feb 18 19:19:15 crc kubenswrapper[4942]: I0218 19:19:15.838629 4942 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" podStartSLOduration=94.838599352 podStartE2EDuration="1m34.838599352s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:00.769114746 +0000 UTC m=+100.474047481" watchObservedRunningTime="2026-02-18 19:19:15.838599352 +0000 UTC m=+115.543532057" Feb 18 19:19:16 crc kubenswrapper[4942]: I0218 19:19:16.035096 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:16 crc kubenswrapper[4942]: I0218 19:19:16.035151 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:16 crc kubenswrapper[4942]: I0218 19:19:16.035187 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:16 crc kubenswrapper[4942]: E0218 19:19:16.035339 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:16 crc kubenswrapper[4942]: E0218 19:19:16.035562 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:16 crc kubenswrapper[4942]: E0218 19:19:16.035667 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:16 crc kubenswrapper[4942]: I0218 19:19:16.809646 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/1.log" Feb 18 19:19:17 crc kubenswrapper[4942]: I0218 19:19:17.035665 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:17 crc kubenswrapper[4942]: E0218 19:19:17.035952 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:18 crc kubenswrapper[4942]: I0218 19:19:18.034972 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:18 crc kubenswrapper[4942]: I0218 19:19:18.034972 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:18 crc kubenswrapper[4942]: E0218 19:19:18.035160 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:18 crc kubenswrapper[4942]: E0218 19:19:18.035519 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:18 crc kubenswrapper[4942]: I0218 19:19:18.036259 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:18 crc kubenswrapper[4942]: E0218 19:19:18.036535 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:19 crc kubenswrapper[4942]: I0218 19:19:19.035211 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:19 crc kubenswrapper[4942]: E0218 19:19:19.035391 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:20 crc kubenswrapper[4942]: I0218 19:19:20.035426 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:20 crc kubenswrapper[4942]: I0218 19:19:20.035566 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:20 crc kubenswrapper[4942]: I0218 19:19:20.035426 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:20 crc kubenswrapper[4942]: E0218 19:19:20.035666 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:20 crc kubenswrapper[4942]: E0218 19:19:20.035733 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:20 crc kubenswrapper[4942]: E0218 19:19:20.035872 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:20 crc kubenswrapper[4942]: E0218 19:19:20.983040 4942 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 18 19:19:21 crc kubenswrapper[4942]: I0218 19:19:21.036515 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:21 crc kubenswrapper[4942]: E0218 19:19:21.036878 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:21 crc kubenswrapper[4942]: I0218 19:19:21.038031 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:19:21 crc kubenswrapper[4942]: E0218 19:19:21.160803 4942 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 19:19:21 crc kubenswrapper[4942]: I0218 19:19:21.831508 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/3.log" Feb 18 19:19:21 crc kubenswrapper[4942]: I0218 19:19:21.834983 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"} Feb 18 19:19:21 crc kubenswrapper[4942]: I0218 19:19:21.835364 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:19:21 crc kubenswrapper[4942]: I0218 19:19:21.870710 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podStartSLOduration=100.87067634900001 podStartE2EDuration="1m40.870676349s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:21.868276647 +0000 UTC m=+121.573209412" watchObservedRunningTime="2026-02-18 19:19:21.870676349 +0000 UTC m=+121.575609054" Feb 18 19:19:22 crc kubenswrapper[4942]: I0218 19:19:22.035654 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:22 crc kubenswrapper[4942]: I0218 19:19:22.035706 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:22 crc kubenswrapper[4942]: E0218 19:19:22.035921 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:22 crc kubenswrapper[4942]: I0218 19:19:22.035969 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:22 crc kubenswrapper[4942]: E0218 19:19:22.036095 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:22 crc kubenswrapper[4942]: E0218 19:19:22.036246 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:22 crc kubenswrapper[4942]: I0218 19:19:22.106632 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qwg6q"] Feb 18 19:19:22 crc kubenswrapper[4942]: I0218 19:19:22.839495 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:22 crc kubenswrapper[4942]: E0218 19:19:22.840804 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:23 crc kubenswrapper[4942]: I0218 19:19:23.035387 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:23 crc kubenswrapper[4942]: E0218 19:19:23.035632 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:24 crc kubenswrapper[4942]: I0218 19:19:24.035842 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:24 crc kubenswrapper[4942]: I0218 19:19:24.035910 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:24 crc kubenswrapper[4942]: I0218 19:19:24.035886 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:24 crc kubenswrapper[4942]: E0218 19:19:24.036064 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:24 crc kubenswrapper[4942]: E0218 19:19:24.036302 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:24 crc kubenswrapper[4942]: E0218 19:19:24.036409 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:25 crc kubenswrapper[4942]: I0218 19:19:25.035162 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:25 crc kubenswrapper[4942]: E0218 19:19:25.035388 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:26 crc kubenswrapper[4942]: I0218 19:19:26.035259 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:26 crc kubenswrapper[4942]: I0218 19:19:26.035259 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:26 crc kubenswrapper[4942]: E0218 19:19:26.035483 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:26 crc kubenswrapper[4942]: E0218 19:19:26.035556 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:26 crc kubenswrapper[4942]: I0218 19:19:26.036544 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:26 crc kubenswrapper[4942]: E0218 19:19:26.036940 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:26 crc kubenswrapper[4942]: E0218 19:19:26.162833 4942 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 19:19:27 crc kubenswrapper[4942]: I0218 19:19:27.035497 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:27 crc kubenswrapper[4942]: E0218 19:19:27.035839 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:27 crc kubenswrapper[4942]: I0218 19:19:27.036606 4942 scope.go:117] "RemoveContainer" containerID="4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46" Feb 18 19:19:27 crc kubenswrapper[4942]: I0218 19:19:27.861705 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/1.log" Feb 18 19:19:27 crc kubenswrapper[4942]: I0218 19:19:27.861842 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jfwb" event={"ID":"75150b8c-7a02-497b-86c3-eabc9c8dbc55","Type":"ContainerStarted","Data":"62118c834582250ad430997ee392aa040ba0e100f92c0bb922d559c42cf4e958"} Feb 18 19:19:28 crc kubenswrapper[4942]: I0218 19:19:28.035206 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:28 crc kubenswrapper[4942]: I0218 19:19:28.035285 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:28 crc kubenswrapper[4942]: I0218 19:19:28.035342 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:28 crc kubenswrapper[4942]: E0218 19:19:28.035372 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:28 crc kubenswrapper[4942]: E0218 19:19:28.035594 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:28 crc kubenswrapper[4942]: E0218 19:19:28.036091 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:29 crc kubenswrapper[4942]: I0218 19:19:29.035871 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:29 crc kubenswrapper[4942]: E0218 19:19:29.036081 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:30 crc kubenswrapper[4942]: I0218 19:19:30.035787 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:30 crc kubenswrapper[4942]: I0218 19:19:30.035808 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:30 crc kubenswrapper[4942]: E0218 19:19:30.035970 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:30 crc kubenswrapper[4942]: I0218 19:19:30.035808 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:30 crc kubenswrapper[4942]: E0218 19:19:30.036144 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:30 crc kubenswrapper[4942]: E0218 19:19:30.036242 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:31 crc kubenswrapper[4942]: I0218 19:19:31.034937 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:31 crc kubenswrapper[4942]: E0218 19:19:31.036957 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.035449 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.035616 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.035637 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.040547 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.042403 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.042466 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.042652 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.042713 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.042658 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 19:19:33 crc kubenswrapper[4942]: I0218 19:19:33.035829 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:39 crc kubenswrapper[4942]: I0218 19:19:39.916283 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 18 19:19:39 crc kubenswrapper[4942]: I0218 19:19:39.992646 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd"] Feb 18 19:19:39 crc kubenswrapper[4942]: I0218 19:19:39.993418 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" Feb 18 19:19:39 crc kubenswrapper[4942]: I0218 19:19:39.999651 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4pmfw"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.000283 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.001016 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.001573 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.001723 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.003402 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bd7zz"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.004322 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.005283 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kpfjc"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.006013 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.006565 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tndhs"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.007203 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tndhs" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.023172 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.023172 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.027199 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.029538 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-5l26l"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.030206 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.034005 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.036325 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.036482 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.036545 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.036557 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.036627 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.036645 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.036972 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.037356 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.037508 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 19:19:40 crc 
kubenswrapper[4942]: I0218 19:19:40.037649 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.037543 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.037861 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.037514 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.040274 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.041153 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.041253 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.041281 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.041307 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.041327 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.041330 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 19:19:40 crc 
kubenswrapper[4942]: I0218 19:19:40.041378 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.041267 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.041445 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.042463 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.042588 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.042696 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.042790 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.042826 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.042934 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.043185 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.043331 4942 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.043671 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.044485 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.045279 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.045332 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.052727 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.056029 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.058737 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4t28"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.061260 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.059155 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-trusted-ca\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.074034 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077549 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077630 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wh89\" (UniqueName: \"kubernetes.io/projected/42dda107-038c-42c1-8182-52bee75caea9-kube-api-access-2wh89\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077663 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-serving-cert\") pod \"console-operator-58897d9998-4pmfw\" (UID: 
\"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077700 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077729 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077806 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077840 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc 
kubenswrapper[4942]: I0218 19:19:40.077869 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvgsp\" (UniqueName: \"kubernetes.io/projected/4afc5765-32dc-4b49-b1a3-9141c2c96087-kube-api-access-mvgsp\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077896 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-config\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077946 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42dda107-038c-42c1-8182-52bee75caea9-audit-dir\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077973 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078006 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078044 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078074 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-config\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078106 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cckls\" (UniqueName: \"kubernetes.io/projected/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-kube-api-access-cckls\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078136 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078163 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-service-ca-bundle\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078197 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d941adf-0c5e-46d6-9a7c-a7677468f322-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078230 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlvr4\" (UniqueName: \"kubernetes.io/projected/cb8403e3-f9b3-4ddf-8688-1a025a2b9291-kube-api-access-rlvr4\") pod \"downloads-7954f5f757-tndhs\" (UID: \"cb8403e3-f9b3-4ddf-8688-1a025a2b9291\") " pod="openshift-console/downloads-7954f5f757-tndhs" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078273 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4afc5765-32dc-4b49-b1a3-9141c2c96087-serving-cert\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078332 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078361 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078397 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-audit-policies\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078427 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d941adf-0c5e-46d6-9a7c-a7677468f322-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078474 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chfv2\" (UniqueName: \"kubernetes.io/projected/0d941adf-0c5e-46d6-9a7c-a7677468f322-kube-api-access-chfv2\") pod 
\"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078506 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.082064 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v5w2k"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.083080 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.086366 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2fcrf"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.087181 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.087360 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.088123 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p42pr"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.088991 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.091495 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.091537 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.091638 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.092171 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.092487 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.092679 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.092714 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.093054 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.095187 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.095428 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.096294 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.096618 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.098935 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.099021 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.100105 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.100407 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.100752 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.100872 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.101153 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.102258 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.102889 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103109 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-x5rln"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103139 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103235 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103331 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103406 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103479 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103553 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103630 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103720 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.104450 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.104692 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.104916 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.105199 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.105338 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.105435 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.105863 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.105897 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106053 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106136 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106181 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106352 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106506 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106582 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106656 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106726 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106835 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.107829 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bgd6x"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.108575 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.122659 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.122991 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.123184 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.124532 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.124929 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.125204 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.125590 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.128452 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.129174 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.138702 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.138825 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.139268 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.151646 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.151703 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.151987 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.152103 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.152595 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.152820 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153114 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153159 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153307 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153405 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153438 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153405 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153542 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153587 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153682 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153791 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153882 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.154196 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.155031 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.155543 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.159518 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-fgw8l"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.160280 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.159531 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.161156 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.161193 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.160249 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.161281 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-fgw8l"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.162856 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.163560 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.164078 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.164396 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.164427 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.166894 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.167551 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.169845 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4pmfw"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.172210 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.173444 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.174092 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.176855 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.179117 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-config\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.179166 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-image-import-ca\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.179190 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-encryption-config\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.179213 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn6b4\" (UniqueName: \"kubernetes.io/projected/5683bb73-dc7f-40ed-86cd-0c08f2d38147-kube-api-access-nn6b4\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.179236 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/48a8b317-27eb-4d20-93ad-37fa559ec858-images\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.179295 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42dda107-038c-42c1-8182-52bee75caea9-audit-dir\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.179533 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.179564 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg4sb\" (UniqueName: \"kubernetes.io/projected/e3586689-cf81-4cd2-84d1-70b0ce221b9d-kube-api-access-kg4sb\") pod \"cluster-samples-operator-665b6dd947-vms6h\" (UID: \"e3586689-cf81-4cd2-84d1-70b0ce221b9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.180045 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.180434 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-config\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.180956 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa346657-46eb-4817-b206-4c09d46d4a55-serving-cert\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.181124 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42dda107-038c-42c1-8182-52bee75caea9-audit-dir\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.181202 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.181599 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.181755 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zj44h"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.181972 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.182266 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.182638 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9wcp7"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.183003 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.183264 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.183529 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.183654 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.183792 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6890b7aa-fac3-4c00-90cc-4618ddfae25e-node-pullsecrets\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.185205 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b488q"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.184269 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.185831 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.185980 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.186012 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.186227 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.186672 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.187139 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.190432 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfkrb"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.191151 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.191660 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tndhs"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.191724 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.191986 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.193869 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kpfjc"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.194274 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-s57sd"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.195074 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-s57sd"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.195525 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.195631 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2fcrf"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.197429 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.200507 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p42pr"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.200545 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-x5rln"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.200559 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bd7zz"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.202398 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203374 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-audit-policies\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203419 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-etcd-client\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203453 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/994be5c4-0c9d-4577-82e8-644d64c3ab1d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203477 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbqqn\" (UniqueName: \"kubernetes.io/projected/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-kube-api-access-dbqqn\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203509 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203532 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-serving-cert\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203558 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-config\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203590 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cckls\" (UniqueName: \"kubernetes.io/projected/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-kube-api-access-cckls\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203612 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6890b7aa-fac3-4c00-90cc-4618ddfae25e-audit-dir\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203635 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203661 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-service-ca-bundle\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203685 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/709f9378-2d1c-4158-9521-e6000e06eb5e-auth-proxy-config\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203705 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-config\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203731 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d941adf-0c5e-46d6-9a7c-a7677468f322-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203753 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86fdeda0-1ae3-488d-9612-d633a5fca64f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203801 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlvr4\" (UniqueName: \"kubernetes.io/projected/cb8403e3-f9b3-4ddf-8688-1a025a2b9291-kube-api-access-rlvr4\") pod \"downloads-7954f5f757-tndhs\" (UID: \"cb8403e3-f9b3-4ddf-8688-1a025a2b9291\") " pod="openshift-console/downloads-7954f5f757-tndhs"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203822 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/86fdeda0-1ae3-488d-9612-d633a5fca64f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203845 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/709f9378-2d1c-4158-9521-e6000e06eb5e-machine-approver-tls\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203871 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709f9378-2d1c-4158-9521-e6000e06eb5e-config\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203912 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/4afc5765-32dc-4b49-b1a3-9141c2c96087-serving-cert\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203970 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203996 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204027 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e3586689-cf81-4cd2-84d1-70b0ce221b9d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vms6h\" (UID: \"e3586689-cf81-4cd2-84d1-70b0ce221b9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204051 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-encryption-config\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204076 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-config\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204095 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-audit\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204116 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-oauth-serving-cert\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204136 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48shh\" (UniqueName: \"kubernetes.io/projected/cbe755cf-b7a2-4557-9368-5d71df455408-kube-api-access-48shh\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204156 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-audit-policies\") pod \"oauth-openshift-558db77b4-kpfjc\" 
(UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204175 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204197 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-serving-cert\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204221 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d941adf-0c5e-46d6-9a7c-a7677468f322-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204247 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/86fdeda0-1ae3-488d-9612-d633a5fca64f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204264 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-oauth-config\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204282 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/48a8b317-27eb-4d20-93ad-37fa559ec858-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204299 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-272vp\" (UniqueName: \"kubernetes.io/projected/48a8b317-27eb-4d20-93ad-37fa559ec858-kube-api-access-272vp\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204319 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204336 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-client-ca\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204355 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-serving-cert\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204382 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wfd2\" (UniqueName: \"kubernetes.io/projected/6890b7aa-fac3-4c00-90cc-4618ddfae25e-kube-api-access-5wfd2\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204408 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-etcd-client\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204425 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcdjq\" (UniqueName: \"kubernetes.io/projected/709f9378-2d1c-4158-9521-e6000e06eb5e-kube-api-access-pcdjq\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204445 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mddqc\" (UniqueName: \"kubernetes.io/projected/fa346657-46eb-4817-b206-4c09d46d4a55-kube-api-access-mddqc\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204473 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chfv2\" (UniqueName: \"kubernetes.io/projected/0d941adf-0c5e-46d6-9a7c-a7677468f322-kube-api-access-chfv2\") pod \"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204492 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204513 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae259edb-f577-48b8-b236-91656ac269d2-metrics-tls\") pod \"dns-operator-744455d44c-bgd6x\" (UID: \"ae259edb-f577-48b8-b236-91656ac269d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204533 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-service-ca\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " 
pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204554 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbe755cf-b7a2-4557-9368-5d71df455408-audit-dir\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204586 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-trusted-ca\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204619 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204643 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wh89\" (UniqueName: \"kubernetes.io/projected/42dda107-038c-42c1-8182-52bee75caea9-kube-api-access-2wh89\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204665 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-serving-cert\") pod 
\"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204689 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drnjv\" (UniqueName: \"kubernetes.io/projected/86fdeda0-1ae3-488d-9612-d633a5fca64f-kube-api-access-drnjv\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204717 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204740 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204780 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/994be5c4-0c9d-4577-82e8-644d64c3ab1d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" Feb 18 
19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204803 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-config\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204827 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204853 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204878 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsz8q\" (UniqueName: \"kubernetes.io/projected/ae259edb-f577-48b8-b236-91656ac269d2-kube-api-access-rsz8q\") pod \"dns-operator-744455d44c-bgd6x\" (UID: \"ae259edb-f577-48b8-b236-91656ac269d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204901 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-serving-cert\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204923 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48a8b317-27eb-4d20-93ad-37fa559ec858-config\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204950 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204971 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-etcd-serving-ca\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.205066 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-trusted-ca-bundle\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.205094 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvgsp\" (UniqueName: \"kubernetes.io/projected/4afc5765-32dc-4b49-b1a3-9141c2c96087-kube-api-access-mvgsp\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.205116 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994be5c4-0c9d-4577-82e8-644d64c3ab1d-config\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.209631 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-config\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.221508 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-service-ca-bundle\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.222371 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d941adf-0c5e-46d6-9a7c-a7677468f322-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.225436 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5l26l"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.225497 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v5w2k"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.225451 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.225754 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-audit-policies\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.226427 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.228294 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.230064 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.230183 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.230726 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.230915 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4t28"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.231599 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-serving-cert\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.231641 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.232686 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.233053 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4afc5765-32dc-4b49-b1a3-9141c2c96087-serving-cert\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.233492 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-trusted-ca\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.234628 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 
19:19:40.235579 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.236220 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.237103 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d941adf-0c5e-46d6-9a7c-a7677468f322-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.243100 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.244260 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.246017 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.247093 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-bgd6x"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.248174 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.249635 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.250701 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.251814 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.253453 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.255309 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.255647 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-s4kjv"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.256615 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-s4kjv" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.257452 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w9lpz"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.260372 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.260498 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.260517 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.261203 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-s57sd"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.263025 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.264342 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.266245 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.267195 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.270404 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zj44h"] Feb 18 19:19:40 crc 
kubenswrapper[4942]: I0218 19:19:40.272163 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.273794 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.276422 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9wcp7"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.276454 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.277820 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.279191 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.280025 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfkrb"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.281089 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.282042 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b488q"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.283395 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-s4kjv"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.284520 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-w9lpz"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.285770 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-xs9jl"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.286398 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xs9jl" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.295612 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.305969 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48shh\" (UniqueName: \"kubernetes.io/projected/cbe755cf-b7a2-4557-9368-5d71df455408-kube-api-access-48shh\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306005 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-config\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306023 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-audit\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306040 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-oauth-serving-cert\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306058 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306073 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-serving-cert\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306092 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/86fdeda0-1ae3-488d-9612-d633a5fca64f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306111 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-oauth-config\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306225 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/48a8b317-27eb-4d20-93ad-37fa559ec858-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306244 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-272vp\" (UniqueName: \"kubernetes.io/projected/48a8b317-27eb-4d20-93ad-37fa559ec858-kube-api-access-272vp\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306259 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306274 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-client-ca\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306292 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-serving-cert\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" Feb 18 
19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306306 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wfd2\" (UniqueName: \"kubernetes.io/projected/6890b7aa-fac3-4c00-90cc-4618ddfae25e-kube-api-access-5wfd2\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306323 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-etcd-client\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306338 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcdjq\" (UniqueName: \"kubernetes.io/projected/709f9378-2d1c-4158-9521-e6000e06eb5e-kube-api-access-pcdjq\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306352 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mddqc\" (UniqueName: \"kubernetes.io/projected/fa346657-46eb-4817-b206-4c09d46d4a55-kube-api-access-mddqc\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306366 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae259edb-f577-48b8-b236-91656ac269d2-metrics-tls\") pod \"dns-operator-744455d44c-bgd6x\" (UID: 
\"ae259edb-f577-48b8-b236-91656ac269d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306382 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-service-ca\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306402 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbe755cf-b7a2-4557-9368-5d71df455408-audit-dir\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306436 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drnjv\" (UniqueName: \"kubernetes.io/projected/86fdeda0-1ae3-488d-9612-d633a5fca64f-kube-api-access-drnjv\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306452 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/994be5c4-0c9d-4577-82e8-644d64c3ab1d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306467 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-config\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306484 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306507 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-serving-cert\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306525 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48a8b317-27eb-4d20-93ad-37fa559ec858-config\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306543 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsz8q\" (UniqueName: \"kubernetes.io/projected/ae259edb-f577-48b8-b236-91656ac269d2-kube-api-access-rsz8q\") pod \"dns-operator-744455d44c-bgd6x\" (UID: \"ae259edb-f577-48b8-b236-91656ac269d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306562 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-etcd-serving-ca\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306584 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-trusted-ca-bundle\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306612 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994be5c4-0c9d-4577-82e8-644d64c3ab1d-config\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306630 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn6b4\" (UniqueName: \"kubernetes.io/projected/5683bb73-dc7f-40ed-86cd-0c08f2d38147-kube-api-access-nn6b4\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306645 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/48a8b317-27eb-4d20-93ad-37fa559ec858-images\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 
19:19:40.306665 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-image-import-ca\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306687 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-encryption-config\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306709 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg4sb\" (UniqueName: \"kubernetes.io/projected/e3586689-cf81-4cd2-84d1-70b0ce221b9d-kube-api-access-kg4sb\") pod \"cluster-samples-operator-665b6dd947-vms6h\" (UID: \"e3586689-cf81-4cd2-84d1-70b0ce221b9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306726 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa346657-46eb-4817-b206-4c09d46d4a55-serving-cert\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306751 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " 
pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306775 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-config\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306845 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6890b7aa-fac3-4c00-90cc-4618ddfae25e-node-pullsecrets\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306788 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-oauth-serving-cert\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306789 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6890b7aa-fac3-4c00-90cc-4618ddfae25e-node-pullsecrets\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306911 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-audit-policies\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306936 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-etcd-client\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306966 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/994be5c4-0c9d-4577-82e8-644d64c3ab1d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306993 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbqqn\" (UniqueName: \"kubernetes.io/projected/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-kube-api-access-dbqqn\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307021 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-serving-cert\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307076 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6890b7aa-fac3-4c00-90cc-4618ddfae25e-audit-dir\") pod \"apiserver-76f77b778f-v5w2k\" 
(UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307107 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-config\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307133 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/709f9378-2d1c-4158-9521-e6000e06eb5e-auth-proxy-config\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307163 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86fdeda0-1ae3-488d-9612-d633a5fca64f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307186 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/709f9378-2d1c-4158-9521-e6000e06eb5e-machine-approver-tls\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307207 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/709f9378-2d1c-4158-9521-e6000e06eb5e-config\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307243 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/86fdeda0-1ae3-488d-9612-d633a5fca64f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307312 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-encryption-config\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307342 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e3586689-cf81-4cd2-84d1-70b0ce221b9d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vms6h\" (UID: \"e3586689-cf81-4cd2-84d1-70b0ce221b9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307361 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 
19:19:40.307858 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-config\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307857 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbe755cf-b7a2-4557-9368-5d71df455408-audit-dir\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.308442 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709f9378-2d1c-4158-9521-e6000e06eb5e-config\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.308465 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-service-ca\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.308502 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.309228 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/709f9378-2d1c-4158-9521-e6000e06eb5e-auth-proxy-config\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.309430 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-audit-policies\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.309650 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-serving-cert\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.309828 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994be5c4-0c9d-4577-82e8-644d64c3ab1d-config\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.310272 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-config\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.310309 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.310447 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/48a8b317-27eb-4d20-93ad-37fa559ec858-images\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.310724 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6890b7aa-fac3-4c00-90cc-4618ddfae25e-audit-dir\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.311310 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86fdeda0-1ae3-488d-9612-d633a5fca64f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.311361 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc 
kubenswrapper[4942]: I0218 19:19:40.311391 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-audit\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.311435 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-image-import-ca\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.311958 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48a8b317-27eb-4d20-93ad-37fa559ec858-config\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.311972 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-etcd-client\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.312017 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-etcd-serving-ca\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.312182 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-client-ca\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.312353 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-etcd-client\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.312597 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-trusted-ca-bundle\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.313126 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/994be5c4-0c9d-4577-82e8-644d64c3ab1d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.313160 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-serving-cert\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.313324 
4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/48a8b317-27eb-4d20-93ad-37fa559ec858-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.313794 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/709f9378-2d1c-4158-9521-e6000e06eb5e-machine-approver-tls\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.313839 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-encryption-config\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.314072 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-serving-cert\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.314098 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-encryption-config\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: 
I0218 19:19:40.314810 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e3586689-cf81-4cd2-84d1-70b0ce221b9d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vms6h\" (UID: \"e3586689-cf81-4cd2-84d1-70b0ce221b9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.315406 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa346657-46eb-4817-b206-4c09d46d4a55-serving-cert\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.315875 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-oauth-config\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.316031 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-serving-cert\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.316135 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.335853 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 19:19:40 crc 
kubenswrapper[4942]: I0218 19:19:40.355525 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.384371 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.416231 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.436841 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.445149 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae259edb-f577-48b8-b236-91656ac269d2-metrics-tls\") pod \"dns-operator-744455d44c-bgd6x\" (UID: \"ae259edb-f577-48b8-b236-91656ac269d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.455747 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.478107 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.495627 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.516554 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.536556 4942 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.542266 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/86fdeda0-1ae3-488d-9612-d633a5fca64f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.556533 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.575888 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.597063 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.616450 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.636689 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.656332 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.676515 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 19:19:40 
crc kubenswrapper[4942]: I0218 19:19:40.696879 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.737495 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.758604 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.777025 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.796882 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.816309 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.837411 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.856821 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.876836 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.897163 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.916719 4942 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.937444 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.956341 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.976391 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.995921 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.016844 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.037233 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.057799 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.076921 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.096165 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.116733 4942 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.136861 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.156601 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.176442 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.194341 4942 request.go:700] Waited for 1.012403879s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0 Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.196713 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.216885 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.236510 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.255727 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.275842 4942 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.296953 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.316647 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.336347 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.356081 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.378124 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.397411 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.416390 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.436584 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.456250 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.476820 4942 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.497512 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.516609 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.536217 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.557351 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.575961 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.597028 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.617103 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.636374 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.656875 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.689604 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.696802 4942 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.716607 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.736497 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.757536 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.775806 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.797040 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.816907 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.836178 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.856455 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.903314 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cckls\" (UniqueName: \"kubernetes.io/projected/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-kube-api-access-cckls\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " 
pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.929258 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlvr4\" (UniqueName: \"kubernetes.io/projected/cb8403e3-f9b3-4ddf-8688-1a025a2b9291-kube-api-access-rlvr4\") pod \"downloads-7954f5f757-tndhs\" (UID: \"cb8403e3-f9b3-4ddf-8688-1a025a2b9291\") " pod="openshift-console/downloads-7954f5f757-tndhs" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.934065 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chfv2\" (UniqueName: \"kubernetes.io/projected/0d941adf-0c5e-46d6-9a7c-a7677468f322-kube-api-access-chfv2\") pod \"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.960728 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvgsp\" (UniqueName: \"kubernetes.io/projected/4afc5765-32dc-4b49-b1a3-9141c2c96087-kube-api-access-mvgsp\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.974971 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wh89\" (UniqueName: \"kubernetes.io/projected/42dda107-038c-42c1-8182-52bee75caea9-kube-api-access-2wh89\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.975706 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.996474 
4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.016833 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.035899 4942 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.056719 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.076107 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.095991 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.116633 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.119659 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.135869 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.138082 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.161329 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.182133 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48shh\" (UniqueName: \"kubernetes.io/projected/cbe755cf-b7a2-4557-9368-5d71df455408-kube-api-access-48shh\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.185240 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.193734 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/86fdeda0-1ae3-488d-9612-d633a5fca64f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.210067 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-tndhs" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.214583 4942 request.go:700] Waited for 1.906630391s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.230576 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drnjv\" (UniqueName: \"kubernetes.io/projected/86fdeda0-1ae3-488d-9612-d633a5fca64f-kube-api-access-drnjv\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.271502 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.278351 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/994be5c4-0c9d-4577-82e8-644d64c3ab1d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.291880 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcdjq\" (UniqueName: \"kubernetes.io/projected/709f9378-2d1c-4158-9521-e6000e06eb5e-kube-api-access-pcdjq\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.291909 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mddqc\" (UniqueName: \"kubernetes.io/projected/fa346657-46eb-4817-b206-4c09d46d4a55-kube-api-access-mddqc\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.305869 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-272vp\" (UniqueName: \"kubernetes.io/projected/48a8b317-27eb-4d20-93ad-37fa559ec858-kube-api-access-272vp\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.324134 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn6b4\" (UniqueName: \"kubernetes.io/projected/5683bb73-dc7f-40ed-86cd-0c08f2d38147-kube-api-access-nn6b4\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.336894 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg4sb\" (UniqueName: \"kubernetes.io/projected/e3586689-cf81-4cd2-84d1-70b0ce221b9d-kube-api-access-kg4sb\") pod \"cluster-samples-operator-665b6dd947-vms6h\" (UID: \"e3586689-cf81-4cd2-84d1-70b0ce221b9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.355167 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.359013 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsz8q\" (UniqueName: \"kubernetes.io/projected/ae259edb-f577-48b8-b236-91656ac269d2-kube-api-access-rsz8q\") pod \"dns-operator-744455d44c-bgd6x\" (UID: \"ae259edb-f577-48b8-b236-91656ac269d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.380270 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.387540 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbqqn\" (UniqueName: \"kubernetes.io/projected/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-kube-api-access-dbqqn\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.394673 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.402115 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wfd2\" (UniqueName: \"kubernetes.io/projected/6890b7aa-fac3-4c00-90cc-4618ddfae25e-kube-api-access-5wfd2\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.410394 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.420978 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.423468 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd"] Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.435108 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.445958 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-bound-sa-token\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.446059 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bpp7\" (UniqueName: \"kubernetes.io/projected/5d6ad520-b407-4b86-867b-9e9658bfa536-kube-api-access-2bpp7\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.446138 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25z4w\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-kube-api-access-25z4w\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.446166 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a79c946-4621-4b6d-af59-6b919d125502-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.447193 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-client-ca\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.447279 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-tls\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.447357 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-trusted-ca\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.447423 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tmwzh\" (UniqueName: \"kubernetes.io/projected/9a79c946-4621-4b6d-af59-6b919d125502-kube-api-access-tmwzh\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.447466 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-config\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.447487 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-etcd-ca\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.447679 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/087f0c6b-3e9f-4db4-bbcb-a8075e218219-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.448144 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05aed8e4-390c-4589-8a61-2aab50a1d90f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.448217 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-config\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.448246 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d6ad520-b407-4b86-867b-9e9658bfa536-serving-cert\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.448306 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.448402 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7smb\" (UniqueName: \"kubernetes.io/projected/05aed8e4-390c-4589-8a61-2aab50a1d90f-kube-api-access-v7smb\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.448720 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9a79c946-4621-4b6d-af59-6b919d125502-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.448820 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/714a349f-4480-4467-9041-7cae31df7686-serving-cert\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.448943 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: E0218 19:19:42.449515 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:42.949482006 +0000 UTC m=+142.654414671 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.449564 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/087f0c6b-3e9f-4db4-bbcb-a8075e218219-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.449607 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05aed8e4-390c-4589-8a61-2aab50a1d90f-trusted-ca\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.449775 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-etcd-service-ca\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.449924 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-certificates\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.449955 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/714a349f-4480-4467-9041-7cae31df7686-etcd-client\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.450002 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/05aed8e4-390c-4589-8a61-2aab50a1d90f-metrics-tls\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.450047 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtlv7\" (UniqueName: \"kubernetes.io/projected/714a349f-4480-4467-9041-7cae31df7686-kube-api-access-vtlv7\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.452700 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:42 crc kubenswrapper[4942]: W0218 19:19:42.466123 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d941adf_0c5e_46d6_9a7c_a7677468f322.slice/crio-da475d0721a7d193057b00168df055f153446074dc5684b508d817f1c1d3fe48 WatchSource:0}: Error finding container da475d0721a7d193057b00168df055f153446074dc5684b508d817f1c1d3fe48: Status 404 returned error can't find the container with id da475d0721a7d193057b00168df055f153446074dc5684b508d817f1c1d3fe48 Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552360 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:42 crc kubenswrapper[4942]: E0218 19:19:42.552534 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.052510564 +0000 UTC m=+142.757443229 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552564 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d6ad520-b407-4b86-867b-9e9658bfa536-serving-cert\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552593 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wmrg\" (UniqueName: \"kubernetes.io/projected/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-kube-api-access-8wmrg\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552611 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-plugins-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552635 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552651 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j82p\" (UniqueName: \"kubernetes.io/projected/2407a935-a8b9-4894-baaf-7460fee3d22b-kube-api-access-8j82p\") pod \"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552667 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/696bcbdd-c9ca-45cd-ae12-e733919e2832-signing-key\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552681 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8134898c-a265-4fa0-8548-075ea0812b7b-service-ca-bundle\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552699 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8e73114-6ccf-40ba-94e8-437e2db303fb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" Feb 18 19:19:42 crc 
kubenswrapper[4942]: I0218 19:19:42.552725 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7smb\" (UniqueName: \"kubernetes.io/projected/05aed8e4-390c-4589-8a61-2aab50a1d90f-kube-api-access-v7smb\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552742 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a79c946-4621-4b6d-af59-6b919d125502-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.553208 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgftz\" (UniqueName: \"kubernetes.io/projected/0e51d4dc-e813-4166-bb6a-45d083a09d2a-kube-api-access-wgftz\") pod \"migrator-59844c95c7-9grql\" (UID: \"0e51d4dc-e813-4166-bb6a-45d083a09d2a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.553261 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz8wn\" (UniqueName: \"kubernetes.io/projected/af99a6af-5df3-4b87-8f14-a564c5d86164-kube-api-access-cz8wn\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.553308 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/087f0c6b-3e9f-4db4-bbcb-a8075e218219-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.553334 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88vg5\" (UniqueName: \"kubernetes.io/projected/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-kube-api-access-88vg5\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.553355 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae63de17-3438-46ed-94f9-5f51d8a216fd-certs\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " pod="openshift-machine-config-operator/machine-config-server-xs9jl" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.553402 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.553747 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-etcd-service-ca\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc 
kubenswrapper[4942]: I0218 19:19:42.553815 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/087f0c6b-3e9f-4db4-bbcb-a8075e218219-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.553909 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d48r9\" (UniqueName: \"kubernetes.io/projected/ae63de17-3438-46ed-94f9-5f51d8a216fd-kube-api-access-d48r9\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " pod="openshift-machine-config-operator/machine-config-server-xs9jl" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554020 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zj44h\" (UID: \"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554049 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrch7\" (UniqueName: \"kubernetes.io/projected/696bcbdd-c9ca-45cd-ae12-e733919e2832-kube-api-access-qrch7\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554079 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01ba4570-01bb-4964-8c1d-791c25d72a1a-config-volume\") pod 
\"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554102 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51ed31a1-9bf0-40ff-8bca-041d691662b4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554123 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl889\" (UniqueName: \"kubernetes.io/projected/b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7-kube-api-access-jl889\") pod \"multus-admission-controller-857f4d67dd-zj44h\" (UID: \"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554194 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-csi-data-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554323 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-etcd-service-ca\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554345 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-bound-sa-token\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554471 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554617 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-metrics-certs\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554735 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a79c946-4621-4b6d-af59-6b919d125502-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554780 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-client-ca\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554831 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h4qr\" (UniqueName: \"kubernetes.io/projected/51ed31a1-9bf0-40ff-8bca-041d691662b4-kube-api-access-4h4qr\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554948 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/157afc1c-f5df-419b-a760-336d14bbbd6d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555053 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01ba4570-01bb-4964-8c1d-791c25d72a1a-secret-volume\") pod \"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555095 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g7fx\" (UniqueName: \"kubernetes.io/projected/01ba4570-01bb-4964-8c1d-791c25d72a1a-kube-api-access-7g7fx\") pod \"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555194 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tmwzh\" (UniqueName: \"kubernetes.io/projected/9a79c946-4621-4b6d-af59-6b919d125502-kube-api-access-tmwzh\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555227 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a79c946-4621-4b6d-af59-6b919d125502-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555278 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnrbc\" (UniqueName: \"kubernetes.io/projected/8134898c-a265-4fa0-8548-075ea0812b7b-kube-api-access-pnrbc\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555459 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8e73114-6ccf-40ba-94e8-437e2db303fb-config\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555511 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555535 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfmcm\" (UniqueName: \"kubernetes.io/projected/0ec933ee-8c36-49a0-8ba5-c7442f4de367-kube-api-access-dfmcm\") pod \"catalog-operator-68c6474976-zz9rm\" (UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555641 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlq6p\" (UniqueName: \"kubernetes.io/projected/461a0658-ae3b-4972-8122-2719276793b9-kube-api-access-rlq6p\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556232 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-default-certificate\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556369 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-webhook-cert\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:42 crc 
kubenswrapper[4942]: I0218 19:19:42.556427 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8e73114-6ccf-40ba-94e8-437e2db303fb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556490 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-apiservice-cert\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556506 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-config\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556523 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2407a935-a8b9-4894-baaf-7460fee3d22b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556566 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/714a349f-4480-4467-9041-7cae31df7686-serving-cert\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556602 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/461a0658-ae3b-4972-8122-2719276793b9-metrics-tls\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556658 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556682 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05aed8e4-390c-4589-8a61-2aab50a1d90f-trusted-ca\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556945 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67ea138c-808d-40ee-9e77-2435676f7fba-cert\") pod \"ingress-canary-s57sd\" (UID: \"67ea138c-808d-40ee-9e77-2435676f7fba\") " pod="openshift-ingress-canary/ingress-canary-s57sd" Feb 18 19:19:42 crc kubenswrapper[4942]: E0218 19:19:42.556970 4942 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.056959174 +0000 UTC m=+142.761891839 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556987 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ec933ee-8c36-49a0-8ba5-c7442f4de367-srv-cert\") pod \"catalog-operator-68c6474976-zz9rm\" (UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557249 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-serving-cert\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557277 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae63de17-3438-46ed-94f9-5f51d8a216fd-node-bootstrap-token\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " 
pod="openshift-machine-config-operator/machine-config-server-xs9jl" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557337 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/157afc1c-f5df-419b-a760-336d14bbbd6d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557364 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a873b689-a8f1-4125-b97c-e9d0f6b06397-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557426 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a873b689-a8f1-4125-b97c-e9d0f6b06397-srv-cert\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557448 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ec933ee-8c36-49a0-8ba5-c7442f4de367-profile-collector-cert\") pod \"catalog-operator-68c6474976-zz9rm\" (UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557690 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51ed31a1-9bf0-40ff-8bca-041d691662b4-images\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557715 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-socket-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557750 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-244x4\" (UniqueName: \"kubernetes.io/projected/bb96ca2b-27a4-42e3-af7f-3514321500a3-kube-api-access-244x4\") pod \"control-plane-machine-set-operator-78cbb6b69f-rd6k5\" (UID: \"bb96ca2b-27a4-42e3-af7f-3514321500a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560032 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c8ec50-d07e-4c96-80b8-22cf232b015c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8nxhq\" (UID: \"83c8ec50-d07e-4c96-80b8-22cf232b015c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560090 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-certificates\") pod 
\"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560114 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/714a349f-4480-4467-9041-7cae31df7686-etcd-client\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560138 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-stats-auth\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560158 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2407a935-a8b9-4894-baaf-7460fee3d22b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560194 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/05aed8e4-390c-4589-8a61-2aab50a1d90f-metrics-tls\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560226 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vtlv7\" (UniqueName: \"kubernetes.io/projected/714a349f-4480-4467-9041-7cae31df7686-kube-api-access-vtlv7\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560248 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51ed31a1-9bf0-40ff-8bca-041d691662b4-proxy-tls\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560272 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtdlh\" (UniqueName: \"kubernetes.io/projected/67ea138c-808d-40ee-9e77-2435676f7fba-kube-api-access-mtdlh\") pod \"ingress-canary-s57sd\" (UID: \"67ea138c-808d-40ee-9e77-2435676f7fba\") " pod="openshift-ingress-canary/ingress-canary-s57sd" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560294 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6lxh\" (UniqueName: \"kubernetes.io/projected/83c8ec50-d07e-4c96-80b8-22cf232b015c-kube-api-access-g6lxh\") pod \"package-server-manager-789f6589d5-8nxhq\" (UID: \"83c8ec50-d07e-4c96-80b8-22cf232b015c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560321 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bpp7\" (UniqueName: \"kubernetes.io/projected/5d6ad520-b407-4b86-867b-9e9658bfa536-kube-api-access-2bpp7\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.559441 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/714a349f-4480-4467-9041-7cae31df7686-serving-cert\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560373 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25z4w\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-kube-api-access-25z4w\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.559598 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a79c946-4621-4b6d-af59-6b919d125502-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560412 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-client-ca\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.562599 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05aed8e4-390c-4589-8a61-2aab50a1d90f-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.562639 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d6ad520-b407-4b86-867b-9e9658bfa536-serving-cert\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.564946 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.565094 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/461a0658-ae3b-4972-8122-2719276793b9-config-volume\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.565120 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-mountpoint-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.565148 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-tmpfs\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:42 crc 
kubenswrapper[4942]: I0218 19:19:42.565167 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv6cc\" (UniqueName: \"kubernetes.io/projected/a873b689-a8f1-4125-b97c-e9d0f6b06397-kube-api-access-rv6cc\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.565217 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-tls\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.565238 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/696bcbdd-c9ca-45cd-ae12-e733919e2832-signing-cabundle\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.565817 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb96ca2b-27a4-42e3-af7f-3514321500a3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rd6k5\" (UID: \"bb96ca2b-27a4-42e3-af7f-3514321500a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.565985 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phjwr\" (UniqueName: 
\"kubernetes.io/projected/efab374b-fec3-4b4e-81f1-002715812a67-kube-api-access-phjwr\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.566021 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/157afc1c-f5df-419b-a760-336d14bbbd6d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.566221 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-proxy-tls\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: \"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.566254 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-registration-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.566294 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-trusted-ca\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 
19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.566748 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-config\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.566799 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-etcd-ca\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.566865 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh7v2\" (UniqueName: \"kubernetes.io/projected/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-kube-api-access-wh7v2\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: \"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.567407 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-etcd-ca\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.569021 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-trusted-ca\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.569294 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: \"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.569510 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/087f0c6b-3e9f-4db4-bbcb-a8075e218219-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.569701 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05aed8e4-390c-4589-8a61-2aab50a1d90f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.569815 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-config\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.570421 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-config\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.571063 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/05aed8e4-390c-4589-8a61-2aab50a1d90f-metrics-tls\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.571452 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-certificates\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.572912 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-tls\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.573372 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-config\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.576889 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/714a349f-4480-4467-9041-7cae31df7686-etcd-client\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.585740 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.588992 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/087f0c6b-3e9f-4db4-bbcb-a8075e218219-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.601721 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7smb\" (UniqueName: \"kubernetes.io/projected/05aed8e4-390c-4589-8a61-2aab50a1d90f-kube-api-access-v7smb\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.626446 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-bound-sa-token\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.631400 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmwzh\" (UniqueName: \"kubernetes.io/projected/9a79c946-4621-4b6d-af59-6b919d125502-kube-api-access-tmwzh\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.671171 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:42 crc kubenswrapper[4942]: E0218 19:19:42.671371 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.171318967 +0000 UTC m=+142.876251632 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672245 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-stats-auth\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672279 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2407a935-a8b9-4894-baaf-7460fee3d22b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672310 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51ed31a1-9bf0-40ff-8bca-041d691662b4-proxy-tls\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672332 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtdlh\" (UniqueName: \"kubernetes.io/projected/67ea138c-808d-40ee-9e77-2435676f7fba-kube-api-access-mtdlh\") pod \"ingress-canary-s57sd\" (UID: \"67ea138c-808d-40ee-9e77-2435676f7fba\") " pod="openshift-ingress-canary/ingress-canary-s57sd"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672353 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6lxh\" (UniqueName: \"kubernetes.io/projected/83c8ec50-d07e-4c96-80b8-22cf232b015c-kube-api-access-g6lxh\") pod \"package-server-manager-789f6589d5-8nxhq\" (UID: \"83c8ec50-d07e-4c96-80b8-22cf232b015c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672393 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/461a0658-ae3b-4972-8122-2719276793b9-config-volume\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672417 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-tmpfs\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672437 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-mountpoint-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672457 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv6cc\" (UniqueName: \"kubernetes.io/projected/a873b689-a8f1-4125-b97c-e9d0f6b06397-kube-api-access-rv6cc\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672480 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/696bcbdd-c9ca-45cd-ae12-e733919e2832-signing-cabundle\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672502 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb96ca2b-27a4-42e3-af7f-3514321500a3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rd6k5\" (UID: \"bb96ca2b-27a4-42e3-af7f-3514321500a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672533 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phjwr\" (UniqueName: \"kubernetes.io/projected/efab374b-fec3-4b4e-81f1-002715812a67-kube-api-access-phjwr\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672553 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-registration-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672574 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/157afc1c-f5df-419b-a760-336d14bbbd6d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672595 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-proxy-tls\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: \"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672621 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh7v2\" (UniqueName: \"kubernetes.io/projected/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-kube-api-access-wh7v2\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: \"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672664 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: \"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672690 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wmrg\" (UniqueName: \"kubernetes.io/projected/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-kube-api-access-8wmrg\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672710 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-plugins-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672732 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j82p\" (UniqueName: \"kubernetes.io/projected/2407a935-a8b9-4894-baaf-7460fee3d22b-kube-api-access-8j82p\") pod \"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.673512 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-registration-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674008 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8e73114-6ccf-40ba-94e8-437e2db303fb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674048 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/696bcbdd-c9ca-45cd-ae12-e733919e2832-signing-key\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674071 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8134898c-a265-4fa0-8548-075ea0812b7b-service-ca-bundle\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674103 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz8wn\" (UniqueName: \"kubernetes.io/projected/af99a6af-5df3-4b87-8f14-a564c5d86164-kube-api-access-cz8wn\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674128 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgftz\" (UniqueName: \"kubernetes.io/projected/0e51d4dc-e813-4166-bb6a-45d083a09d2a-kube-api-access-wgftz\") pod \"migrator-59844c95c7-9grql\" (UID: \"0e51d4dc-e813-4166-bb6a-45d083a09d2a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674151 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88vg5\" (UniqueName: \"kubernetes.io/projected/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-kube-api-access-88vg5\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674182 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae63de17-3438-46ed-94f9-5f51d8a216fd-certs\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " pod="openshift-machine-config-operator/machine-config-server-xs9jl"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674207 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674251 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d48r9\" (UniqueName: \"kubernetes.io/projected/ae63de17-3438-46ed-94f9-5f51d8a216fd-kube-api-access-d48r9\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " pod="openshift-machine-config-operator/machine-config-server-xs9jl"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674684 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zj44h\" (UID: \"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674727 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51ed31a1-9bf0-40ff-8bca-041d691662b4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674752 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrch7\" (UniqueName: \"kubernetes.io/projected/696bcbdd-c9ca-45cd-ae12-e733919e2832-kube-api-access-qrch7\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674788 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01ba4570-01bb-4964-8c1d-791c25d72a1a-config-volume\") pod \"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674815 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl889\" (UniqueName: \"kubernetes.io/projected/b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7-kube-api-access-jl889\") pod \"multus-admission-controller-857f4d67dd-zj44h\" (UID: \"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674862 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-csi-data-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674889 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-metrics-certs\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674941 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h4qr\" (UniqueName: \"kubernetes.io/projected/51ed31a1-9bf0-40ff-8bca-041d691662b4-kube-api-access-4h4qr\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674971 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/157afc1c-f5df-419b-a760-336d14bbbd6d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674996 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01ba4570-01bb-4964-8c1d-791c25d72a1a-secret-volume\") pod \"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675019 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g7fx\" (UniqueName: \"kubernetes.io/projected/01ba4570-01bb-4964-8c1d-791c25d72a1a-kube-api-access-7g7fx\") pod \"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675044 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnrbc\" (UniqueName: \"kubernetes.io/projected/8134898c-a265-4fa0-8548-075ea0812b7b-kube-api-access-pnrbc\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675069 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8e73114-6ccf-40ba-94e8-437e2db303fb-config\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675091 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfmcm\" (UniqueName: \"kubernetes.io/projected/0ec933ee-8c36-49a0-8ba5-c7442f4de367-kube-api-access-dfmcm\") pod \"catalog-operator-68c6474976-zz9rm\" (UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675113 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675136 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlq6p\" (UniqueName: \"kubernetes.io/projected/461a0658-ae3b-4972-8122-2719276793b9-kube-api-access-rlq6p\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675168 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-default-certificate\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675188 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-webhook-cert\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675310 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8e73114-6ccf-40ba-94e8-437e2db303fb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675340 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-config\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675376 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2407a935-a8b9-4894-baaf-7460fee3d22b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675403 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-apiservice-cert\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675425 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/461a0658-ae3b-4972-8122-2719276793b9-metrics-tls\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675457 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675480 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67ea138c-808d-40ee-9e77-2435676f7fba-cert\") pod \"ingress-canary-s57sd\" (UID: \"67ea138c-808d-40ee-9e77-2435676f7fba\") " pod="openshift-ingress-canary/ingress-canary-s57sd"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675577 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/461a0658-ae3b-4972-8122-2719276793b9-config-volume\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.676028 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25z4w\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-kube-api-access-25z4w\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.676411 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-tmpfs\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674969 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-plugins-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.677218 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8134898c-a265-4fa0-8548-075ea0812b7b-service-ca-bundle\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.678181 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01ba4570-01bb-4964-8c1d-791c25d72a1a-config-volume\") pod \"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.678354 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-csi-data-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.678881 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb96ca2b-27a4-42e3-af7f-3514321500a3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rd6k5\" (UID: \"bb96ca2b-27a4-42e3-af7f-3514321500a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.679200 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2407a935-a8b9-4894-baaf-7460fee3d22b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.679171 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: \"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.679465 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.679549 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/696bcbdd-c9ca-45cd-ae12-e733919e2832-signing-cabundle\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.679587 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-proxy-tls\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: \"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675501 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ec933ee-8c36-49a0-8ba5-c7442f4de367-srv-cert\") pod \"catalog-operator-68c6474976-zz9rm\" (UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.680290 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-serving-cert\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.680343 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae63de17-3438-46ed-94f9-5f51d8a216fd-node-bootstrap-token\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " pod="openshift-machine-config-operator/machine-config-server-xs9jl"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.680501 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/157afc1c-f5df-419b-a760-336d14bbbd6d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.681188 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a873b689-a8f1-4125-b97c-e9d0f6b06397-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.681236 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a873b689-a8f1-4125-b97c-e9d0f6b06397-srv-cert\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.681262 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ec933ee-8c36-49a0-8ba5-c7442f4de367-profile-collector-cert\") pod \"catalog-operator-68c6474976-zz9rm\" (UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.681296 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51ed31a1-9bf0-40ff-8bca-041d691662b4-images\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.681318 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-socket-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.681345 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-244x4\" (UniqueName: \"kubernetes.io/projected/bb96ca2b-27a4-42e3-af7f-3514321500a3-kube-api-access-244x4\") pod \"control-plane-machine-set-operator-78cbb6b69f-rd6k5\" (UID: \"bb96ca2b-27a4-42e3-af7f-3514321500a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.681370 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c8ec50-d07e-4c96-80b8-22cf232b015c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8nxhq\" (UID: \"83c8ec50-d07e-4c96-80b8-22cf232b015c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.681523 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8e73114-6ccf-40ba-94e8-437e2db303fb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.682034 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-stats-auth\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.682870 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/696bcbdd-c9ca-45cd-ae12-e733919e2832-signing-key\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.683003 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae63de17-3438-46ed-94f9-5f51d8a216fd-certs\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " pod="openshift-machine-config-operator/machine-config-server-xs9jl" Feb 18 19:19:42 crc kubenswrapper[4942]: E0218 19:19:42.684143 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.184126532 +0000 UTC m=+142.889059277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.684386 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-metrics-certs\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.684549 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-config\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.684902 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.686197 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51ed31a1-9bf0-40ff-8bca-041d691662b4-proxy-tls\") pod \"machine-config-operator-74547568cd-b488q\" (UID: 
\"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.687646 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51ed31a1-9bf0-40ff-8bca-041d691662b4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.690188 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-serving-cert\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.690310 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67ea138c-808d-40ee-9e77-2435676f7fba-cert\") pod \"ingress-canary-s57sd\" (UID: \"67ea138c-808d-40ee-9e77-2435676f7fba\") " pod="openshift-ingress-canary/ingress-canary-s57sd" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.690483 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-default-certificate\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.691295 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2407a935-a8b9-4894-baaf-7460fee3d22b-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.692170 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ec933ee-8c36-49a0-8ba5-c7442f4de367-srv-cert\") pod \"catalog-operator-68c6474976-zz9rm\" (UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.692238 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae63de17-3438-46ed-94f9-5f51d8a216fd-node-bootstrap-token\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " pod="openshift-machine-config-operator/machine-config-server-xs9jl" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.693399 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c8ec50-d07e-4c96-80b8-22cf232b015c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8nxhq\" (UID: \"83c8ec50-d07e-4c96-80b8-22cf232b015c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.693971 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a873b689-a8f1-4125-b97c-e9d0f6b06397-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.694934 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtlv7\" (UniqueName: \"kubernetes.io/projected/714a349f-4480-4467-9041-7cae31df7686-kube-api-access-vtlv7\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.697435 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zj44h\" (UID: \"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.697707 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-socket-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.699033 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/461a0658-ae3b-4972-8122-2719276793b9-metrics-tls\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.699850 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51ed31a1-9bf0-40ff-8bca-041d691662b4-images\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.700120 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8e73114-6ccf-40ba-94e8-437e2db303fb-config\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.700880 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-apiservice-cert\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.704996 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01ba4570-01bb-4964-8c1d-791c25d72a1a-secret-volume\") pod \"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.706505 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/157afc1c-f5df-419b-a760-336d14bbbd6d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.706750 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-mountpoint-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:42 crc 
kubenswrapper[4942]: I0218 19:19:42.708641 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a873b689-a8f1-4125-b97c-e9d0f6b06397-srv-cert\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.709267 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ec933ee-8c36-49a0-8ba5-c7442f4de367-profile-collector-cert\") pod \"catalog-operator-68c6474976-zz9rm\" (UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.709379 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.710046 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-webhook-cert\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.710157 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/157afc1c-f5df-419b-a760-336d14bbbd6d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.729278 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.736721 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bpp7\" (UniqueName: \"kubernetes.io/projected/5d6ad520-b407-4b86-867b-9e9658bfa536-kube-api-access-2bpp7\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.748041 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.754174 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4pmfw"] Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.755080 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05aed8e4-390c-4589-8a61-2aab50a1d90f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.774535 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bd7zz"] Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.780695 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh7v2\" (UniqueName: 
\"kubernetes.io/projected/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-kube-api-access-wh7v2\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: \"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.786152 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:42 crc kubenswrapper[4942]: E0218 19:19:42.787142 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.287117859 +0000 UTC m=+142.992050524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.794841 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phjwr\" (UniqueName: \"kubernetes.io/projected/efab374b-fec3-4b4e-81f1-002715812a67-kube-api-access-phjwr\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.817528 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/157afc1c-f5df-419b-a760-336d14bbbd6d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.833573 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtdlh\" (UniqueName: \"kubernetes.io/projected/67ea138c-808d-40ee-9e77-2435676f7fba-kube-api-access-mtdlh\") pod \"ingress-canary-s57sd\" (UID: \"67ea138c-808d-40ee-9e77-2435676f7fba\") " pod="openshift-ingress-canary/ingress-canary-s57sd" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.837834 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kpfjc"] Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.840615 4942 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.846496 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tndhs"] Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.854541 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv6cc\" (UniqueName: \"kubernetes.io/projected/a873b689-a8f1-4125-b97c-e9d0f6b06397-kube-api-access-rv6cc\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.886636 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j82p\" (UniqueName: \"kubernetes.io/projected/2407a935-a8b9-4894-baaf-7460fee3d22b-kube-api-access-8j82p\") pod \"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.890559 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: E0218 19:19:42.890897 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 19:19:43.390883997 +0000 UTC m=+143.095816652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.916935 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bgd6x"] Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.926694 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.928959 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl889\" (UniqueName: \"kubernetes.io/projected/b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7-kube-api-access-jl889\") pod \"multus-admission-controller-857f4d67dd-zj44h\" (UID: \"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.932264 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrch7\" (UniqueName: \"kubernetes.io/projected/696bcbdd-c9ca-45cd-ae12-e733919e2832-kube-api-access-qrch7\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.936038 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" 
event={"ID":"4afc5765-32dc-4b49-b1a3-9141c2c96087","Type":"ContainerStarted","Data":"0bd508beedecb19783ef3701b1d63010adfb4748bf36f05ad2a53d349bbaec15"} Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.936371 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-s57sd" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.937419 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wmrg\" (UniqueName: \"kubernetes.io/projected/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-kube-api-access-8wmrg\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.940073 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4pmfw" event={"ID":"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29","Type":"ContainerStarted","Data":"be0e2983f257f89983a034d639679ba41911d38165ae858e7dff29e94b7347e8"} Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.940105 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4pmfw" event={"ID":"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29","Type":"ContainerStarted","Data":"7752848ea47de612256d0efd19571c026f62196ff350d11480f295cbf89a9d21"} Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.942585 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.947483 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" event={"ID":"709f9378-2d1c-4158-9521-e6000e06eb5e","Type":"ContainerStarted","Data":"70fafaa1b92bcbda4edb91c0b2cb05438ffafad32fa2fec58890b4c7a238677b"} Feb 18 
19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.947572 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" event={"ID":"709f9378-2d1c-4158-9521-e6000e06eb5e","Type":"ContainerStarted","Data":"ce1c08e5daa508491ee69a9555f342277221c9ed36bf4ab05789d1efc230b58e"} Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.951340 4942 patch_prober.go:28] interesting pod/console-operator-58897d9998-4pmfw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.951411 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4pmfw" podUID="5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.951922 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" event={"ID":"0d941adf-0c5e-46d6-9a7c-a7677468f322","Type":"ContainerStarted","Data":"094fb475778879f3d2db25d5800580f383e0308a54494a858cf3ebc47d46f656"} Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.951968 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" event={"ID":"0d941adf-0c5e-46d6-9a7c-a7677468f322","Type":"ContainerStarted","Data":"da475d0721a7d193057b00168df055f153446074dc5684b508d817f1c1d3fe48"} Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.953090 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tndhs" 
event={"ID":"cb8403e3-f9b3-4ddf-8688-1a025a2b9291","Type":"ContainerStarted","Data":"e1995a8aaae8e1fb6ac760957cee590a7081bcdd015d1c6948be8dd9b3e47eeb"} Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.955719 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" event={"ID":"42dda107-038c-42c1-8182-52bee75caea9","Type":"ContainerStarted","Data":"549a45966f3465b915ee762043425f7fc34d780e5d763266b632f538fe2cd88e"} Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.958205 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h4qr\" (UniqueName: \"kubernetes.io/projected/51ed31a1-9bf0-40ff-8bca-041d691662b4-kube-api-access-4h4qr\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.965267 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: W0218 19:19:42.972163 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae259edb_f577_48b8_b236_91656ac269d2.slice/crio-e5df29d48d25aa3ea435cdc1318c970ef57f723e811a569ec499280ebf2c8923 WatchSource:0}: Error finding container e5df29d48d25aa3ea435cdc1318c970ef57f723e811a569ec499280ebf2c8923: Status 404 returned error can't find the container with id e5df29d48d25aa3ea435cdc1318c970ef57f723e811a569ec499280ebf2c8923 Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.974791 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6lxh\" (UniqueName: \"kubernetes.io/projected/83c8ec50-d07e-4c96-80b8-22cf232b015c-kube-api-access-g6lxh\") pod \"package-server-manager-789f6589d5-8nxhq\" (UID: \"83c8ec50-d07e-4c96-80b8-22cf232b015c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.992522 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:42 crc kubenswrapper[4942]: E0218 19:19:42.992976 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.49295654 +0000 UTC m=+143.197889205 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.995294 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlq6p\" (UniqueName: \"kubernetes.io/projected/461a0658-ae3b-4972-8122-2719276793b9-kube-api-access-rlq6p\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.002188 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.014244 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.021593 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.022161 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8e73114-6ccf-40ba-94e8-437e2db303fb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.040213 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.042187 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.060512 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.068138 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-244x4\" (UniqueName: \"kubernetes.io/projected/bb96ca2b-27a4-42e3-af7f-3514321500a3-kube-api-access-244x4\") pod \"control-plane-machine-set-operator-78cbb6b69f-rd6k5\" (UID: \"bb96ca2b-27a4-42e3-af7f-3514321500a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.073786 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfmcm\" (UniqueName: \"kubernetes.io/projected/0ec933ee-8c36-49a0-8ba5-c7442f4de367-kube-api-access-dfmcm\") pod \"catalog-operator-68c6474976-zz9rm\" (UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.080702 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.090089 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.095568 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.097128 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgftz\" (UniqueName: \"kubernetes.io/projected/0e51d4dc-e813-4166-bb6a-45d083a09d2a-kube-api-access-wgftz\") pod \"migrator-59844c95c7-9grql\" (UID: \"0e51d4dc-e813-4166-bb6a-45d083a09d2a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql" Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.097136 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.597120138 +0000 UTC m=+143.302052923 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.100330 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88vg5\" (UniqueName: \"kubernetes.io/projected/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-kube-api-access-88vg5\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.103409 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:43 crc kubenswrapper[4942]: W0218 19:19:43.103914 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbe755cf_b7a2_4557_9368_5d71df455408.slice/crio-2d9ae5080d8a0911d435f2cee044bba81aa7a5e14f68397f7975e962531d6193 WatchSource:0}: Error finding container 2d9ae5080d8a0911d435f2cee044bba81aa7a5e14f68397f7975e962531d6193: Status 404 returned error can't find the container with id 2d9ae5080d8a0911d435f2cee044bba81aa7a5e14f68397f7975e962531d6193 Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.114488 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d48r9\" (UniqueName: \"kubernetes.io/projected/ae63de17-3438-46ed-94f9-5f51d8a216fd-kube-api-access-d48r9\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " 
pod="openshift-machine-config-operator/machine-config-server-xs9jl" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.118915 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.118972 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-x5rln"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.118984 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.123887 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.134702 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g7fx\" (UniqueName: \"kubernetes.io/projected/01ba4570-01bb-4964-8c1d-791c25d72a1a-kube-api-access-7g7fx\") pod \"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.153360 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.163600 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz8wn\" (UniqueName: \"kubernetes.io/projected/af99a6af-5df3-4b87-8f14-a564c5d86164-kube-api-access-cz8wn\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.164743 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.171634 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.182833 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.185558 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.186275 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5l26l"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.189116 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p42pr"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.203716 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.199697 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.200709 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v5w2k"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.196241 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.199787 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.699754096 +0000 UTC m=+143.404686761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.204519 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.204524 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnrbc\" (UniqueName: \"kubernetes.io/projected/8134898c-a265-4fa0-8548-075ea0812b7b-kube-api-access-pnrbc\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.207404 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.707378332 +0000 UTC m=+143.412310997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.210673 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.215271 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfkrb"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.223243 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.244149 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-s4kjv" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.264802 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.272303 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xs9jl" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.314911 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.315213 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.815193659 +0000 UTC m=+143.520126324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.368358 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.379313 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.416352 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.416728 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.916716086 +0000 UTC m=+143.621648751 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.517594 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.519427 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.018220963 +0000 UTC m=+143.723153628 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.606489 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.621428 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.621880 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.121860638 +0000 UTC m=+143.826793293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.632056 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-s57sd"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.720882 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-4pmfw" podStartSLOduration=122.720861777 podStartE2EDuration="2m2.720861777s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:43.719997274 +0000 UTC m=+143.424929939" watchObservedRunningTime="2026-02-18 19:19:43.720861777 +0000 UTC m=+143.425794442" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.722048 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" podStartSLOduration=122.722038569 podStartE2EDuration="2m2.722038569s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:43.675688569 +0000 UTC m=+143.380621234" watchObservedRunningTime="2026-02-18 19:19:43.722038569 +0000 UTC m=+143.426971234" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.724958 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.725068 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.22505474 +0000 UTC m=+143.929987405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.725278 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.725541 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.225533973 +0000 UTC m=+143.930466638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.748526 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.757670 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4t28"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.813617 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"] Feb 18 19:19:43 crc kubenswrapper[4942]: W0218 19:19:43.819288 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67ea138c_808d_40ee_9e77_2435676f7fba.slice/crio-295d840e2d2cbcd9fed3e4010b0664d7b5d4e6da4e446024f4f143ed6ae594bc WatchSource:0}: Error finding container 295d840e2d2cbcd9fed3e4010b0664d7b5d4e6da4e446024f4f143ed6ae594bc: Status 404 returned error can't find the container with id 295d840e2d2cbcd9fed3e4010b0664d7b5d4e6da4e446024f4f143ed6ae594bc Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.826270 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:43 crc kubenswrapper[4942]: 
E0218 19:19:43.826689 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.32666499 +0000 UTC m=+144.031597655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.838167 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.857748 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.858935 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z"] Feb 18 19:19:43 crc kubenswrapper[4942]: W0218 19:19:43.917967 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d6ad520_b407_4b86_867b_9e9658bfa536.slice/crio-561f208636e4ed3a972d1961d576d8357f830eea84893972b2e168b33bc8de2c WatchSource:0}: Error finding container 561f208636e4ed3a972d1961d576d8357f830eea84893972b2e168b33bc8de2c: Status 404 returned error can't find the container with id 561f208636e4ed3a972d1961d576d8357f830eea84893972b2e168b33bc8de2c Feb 18 
19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.928123 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.928569 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.428552108 +0000 UTC m=+144.133484773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.968539 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" event={"ID":"efab374b-fec3-4b4e-81f1-002715812a67","Type":"ContainerStarted","Data":"3c276811f364fb83706109331be8399abc2c7a535cfd237e4abe3dc07119fee5"} Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.975667 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" event={"ID":"42dda107-038c-42c1-8182-52bee75caea9","Type":"ContainerStarted","Data":"00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0"} Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.976248 4942 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.980058 4942 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-kpfjc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.980095 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" podUID="42dda107-038c-42c1-8182-52bee75caea9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.982375 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" event={"ID":"48a8b317-27eb-4d20-93ad-37fa559ec858","Type":"ContainerStarted","Data":"2def3982ef8fbe6c1791a93b229c90af6eb468fdba16041dec0a2cea286b5b3e"} Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.989272 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn" event={"ID":"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae","Type":"ContainerStarted","Data":"4368118890a7a63201ed7f0bb9d641c1a985ddd14f6b2f95c6e0fe5ff3b25845"} Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.998366 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" event={"ID":"86fdeda0-1ae3-488d-9612-d633a5fca64f","Type":"ContainerStarted","Data":"3e45dbfff30834bdc3ea5cfa860f4a71f2e183e6f4466624047f4e79c5f4c782"} Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.998414 4942 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" event={"ID":"86fdeda0-1ae3-488d-9612-d633a5fca64f","Type":"ContainerStarted","Data":"c6fb470a9f7dd52043826f7743424c4910d5221d27151e50832dbc21d2c68477"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.029200 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.029720 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.529699185 +0000 UTC m=+144.234631850 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.071833 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" event={"ID":"ae259edb-f577-48b8-b236-91656ac269d2","Type":"ContainerStarted","Data":"6e49fe8a89e451498bdcab8bc2c5d3c214682dd0957f2a2a6d828c820f095390"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.071882 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" event={"ID":"ae259edb-f577-48b8-b236-91656ac269d2","Type":"ContainerStarted","Data":"e5df29d48d25aa3ea435cdc1318c970ef57f723e811a569ec499280ebf2c8923"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.081212 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zj44h"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.084598 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.089014 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.097641 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" 
event={"ID":"6890b7aa-fac3-4c00-90cc-4618ddfae25e","Type":"ContainerStarted","Data":"9dc5c5908b962492b879d1c2f73708683ee59282a9e3326436550f837bfce9de"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.104191 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.118766 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.119461 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" event={"ID":"05aed8e4-390c-4589-8a61-2aab50a1d90f","Type":"ContainerStarted","Data":"87b1c55fc887a15eed1d534f98401fa9f1353ef360325bad77bd1d77df197bac"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.121066 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9wcp7"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.133022 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.133344 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.633330299 +0000 UTC m=+144.338262964 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.205657 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b488q"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.207432 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tndhs" event={"ID":"cb8403e3-f9b3-4ddf-8688-1a025a2b9291","Type":"ContainerStarted","Data":"2fbc58cab36a5d2b4e6a2405f15a520af72a5fbbdf8fc502d8c2eabf69ff0731"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.208100 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.208649 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tndhs" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.235822 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.236109 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.73609363 +0000 UTC m=+144.441026295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.260655 4942 patch_prober.go:28] interesting pod/downloads-7954f5f757-tndhs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.260719 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tndhs" podUID="cb8403e3-f9b3-4ddf-8688-1a025a2b9291" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 18 19:19:44 crc kubenswrapper[4942]: W0218 19:19:44.278809 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e51d4dc_e813_4166_bb6a_45d083a09d2a.slice/crio-14f50396c8170cc2a69c2e6637c93b60bc26e9ebdd445e1de739aeb3a386b19a WatchSource:0}: Error finding container 14f50396c8170cc2a69c2e6637c93b60bc26e9ebdd445e1de739aeb3a386b19a: Status 404 returned error can't find the container with id 14f50396c8170cc2a69c2e6637c93b60bc26e9ebdd445e1de739aeb3a386b19a Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.284853 4942 generic.go:334] "Generic (PLEG): container finished" 
podID="cbe755cf-b7a2-4557-9368-5d71df455408" containerID="939dca11536ea0f62b8c54cd4880927921818e8fa63a125e07c0d44498b1e7c2" exitCode=0 Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.284956 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" event={"ID":"cbe755cf-b7a2-4557-9368-5d71df455408","Type":"ContainerDied","Data":"939dca11536ea0f62b8c54cd4880927921818e8fa63a125e07c0d44498b1e7c2"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.284996 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" event={"ID":"cbe755cf-b7a2-4557-9368-5d71df455408","Type":"ContainerStarted","Data":"2d9ae5080d8a0911d435f2cee044bba81aa7a5e14f68397f7975e962531d6193"} Feb 18 19:19:44 crc kubenswrapper[4942]: W0218 19:19:44.306069 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod696bcbdd_c9ca_45cd_ae12_e733919e2832.slice/crio-1dc34889cbf59b8791ee47c092b48d9fc2128835cc39bc93bf9951ee5a6d0e78 WatchSource:0}: Error finding container 1dc34889cbf59b8791ee47c092b48d9fc2128835cc39bc93bf9951ee5a6d0e78: Status 404 returned error can't find the container with id 1dc34889cbf59b8791ee47c092b48d9fc2128835cc39bc93bf9951ee5a6d0e78 Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.337702 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.339415 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.839402276 +0000 UTC m=+144.544334941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.350421 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" event={"ID":"e3586689-cf81-4cd2-84d1-70b0ce221b9d","Type":"ContainerStarted","Data":"10757be4e8c30187826ef9ec48219806ca1641a5947bd7e3a04e2113cd573c9b"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.359057 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-s4kjv"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.418462 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" event={"ID":"4afc5765-32dc-4b49-b1a3-9141c2c96087","Type":"ContainerStarted","Data":"47c265a739641c55bf6470c05de2623aba65db5ef5d48e5181131b6bdf46ed0e"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.437628 4942 generic.go:334] "Generic (PLEG): container finished" podID="bccecc4d-32d0-4367-a3b6-e35ddf53dd1a" containerID="fbaafee7cecf61bb8d77ca6672d2ac8ccf91008f81ea26809234c8c633d166e3" exitCode=0 Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.437698 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" 
event={"ID":"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a","Type":"ContainerDied","Data":"fbaafee7cecf61bb8d77ca6672d2ac8ccf91008f81ea26809234c8c633d166e3"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.437807 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" event={"ID":"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a","Type":"ContainerStarted","Data":"0489fffbee81e5796046d641c35020c77c5b7bd4227cf3560686542a55639094"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.440005 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.440232 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.940202634 +0000 UTC m=+144.645135299 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.445158 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.447966 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.947944323 +0000 UTC m=+144.652876988 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.460191 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w9lpz"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.485405 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" event={"ID":"714a349f-4480-4467-9041-7cae31df7686","Type":"ContainerStarted","Data":"cd56566509dea5efaa88e554c374e00e1e507d1aed85aabf7157db6e887929bb"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.502975 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" event={"ID":"5d6ad520-b407-4b86-867b-9e9658bfa536","Type":"ContainerStarted","Data":"561f208636e4ed3a972d1961d576d8357f830eea84893972b2e168b33bc8de2c"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.557954 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.559931 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:19:45.059900012 +0000 UTC m=+144.764832677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.572136 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" event={"ID":"c8e73114-6ccf-40ba-94e8-437e2db303fb","Type":"ContainerStarted","Data":"ef5beed15fc536692fa5b07511ace2af8b9c9d4db6acd414d2bc77602ba4c2be"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.601224 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.606526 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5l26l" event={"ID":"5683bb73-dc7f-40ed-86cd-0c08f2d38147","Type":"ContainerStarted","Data":"76d66aaf89f1a5aa5957e318124bcfa92f6a6c37df6e5abcffc91fd45db84790"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.656598 4942 csr.go:261] certificate signing request csr-rbtjn is approved, waiting to be issued Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.663149 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-s57sd" event={"ID":"67ea138c-808d-40ee-9e77-2435676f7fba","Type":"ContainerStarted","Data":"295d840e2d2cbcd9fed3e4010b0664d7b5d4e6da4e446024f4f143ed6ae594bc"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.665647 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.666130 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.166113976 +0000 UTC m=+144.871046641 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.668160 4942 csr.go:257] certificate signing request csr-rbtjn is issued Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.679266 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" event={"ID":"fa346657-46eb-4817-b206-4c09d46d4a55","Type":"ContainerStarted","Data":"ef127dd826aba726a31acfac09be4ab1cb60219849d22bd68a56ddc0ec361b83"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.680156 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.687835 4942 patch_prober.go:28] interesting 
pod/route-controller-manager-6576b87f9c-xbkl5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.687879 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" podUID="fa346657-46eb-4817-b206-4c09d46d4a55" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.689302 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.717479 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" event={"ID":"994be5c4-0c9d-4577-82e8-644d64c3ab1d","Type":"ContainerStarted","Data":"5b850e15f65c3ef4888ece7dbbfbdcfb365e837e820448893eebbc6203a65e52"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.719425 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" podStartSLOduration=123.719407283 podStartE2EDuration="2m3.719407283s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:44.717367338 +0000 UTC m=+144.422300003" watchObservedRunningTime="2026-02-18 19:19:44.719407283 +0000 UTC m=+144.424339948" Feb 18 19:19:44 crc kubenswrapper[4942]: W0218 19:19:44.720226 4942 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01ba4570_01bb_4964_8c1d_791c25d72a1a.slice/crio-5e1dc2e31f1a650ed17f640e417c2728e29699e3f206e468494747757484a591 WatchSource:0}: Error finding container 5e1dc2e31f1a650ed17f640e417c2728e29699e3f206e468494747757484a591: Status 404 returned error can't find the container with id 5e1dc2e31f1a650ed17f640e417c2728e29699e3f206e468494747757484a591 Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.738547 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" event={"ID":"2407a935-a8b9-4894-baaf-7460fee3d22b","Type":"ContainerStarted","Data":"ac3114089efca6f7a31fc4b13c9fa503f6eebaa65322f6ddca1e7337eb4a3ab0"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.757085 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" event={"ID":"9a79c946-4621-4b6d-af59-6b919d125502","Type":"ContainerStarted","Data":"3e5c12681c963fd415917b457a854cb5a2dd4d42af76619484fdaf4f737d2da1"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.769018 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.772404 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.272384271 +0000 UTC m=+144.977316936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.797110 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" podStartSLOduration=122.797089617 podStartE2EDuration="2m2.797089617s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:44.791648861 +0000 UTC m=+144.496581526" watchObservedRunningTime="2026-02-18 19:19:44.797089617 +0000 UTC m=+144.502022282" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.804900 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" event={"ID":"709f9378-2d1c-4158-9521-e6000e06eb5e","Type":"ContainerStarted","Data":"4e59809c46fdf61cbd250efd26b5505441876d2b4287cbd6ca8c4b31f6dd6627"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.813899 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" event={"ID":"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e","Type":"ContainerStarted","Data":"c680d03d1b77f75b2e5820fafd3de47164c025235120a5a61943f07c8f9f37a9"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.813948 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 
19:19:44.819181 4942 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g5df6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.819343 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" podUID="5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.827259 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.843682 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" podStartSLOduration=123.843663223 podStartE2EDuration="2m3.843663223s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:44.840285332 +0000 UTC m=+144.545218007" watchObservedRunningTime="2026-02-18 19:19:44.843663223 +0000 UTC m=+144.548595878" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.867378 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" podStartSLOduration=123.867365832 podStartE2EDuration="2m3.867365832s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:44.865476151 
+0000 UTC m=+144.570408816" watchObservedRunningTime="2026-02-18 19:19:44.867365832 +0000 UTC m=+144.572298497" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.873428 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.874055 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.374033802 +0000 UTC m=+145.078966467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.913033 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-5l26l" podStartSLOduration=123.913016413 podStartE2EDuration="2m3.913016413s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:44.912199671 +0000 UTC m=+144.617132336" watchObservedRunningTime="2026-02-18 19:19:44.913016413 +0000 UTC m=+144.617949078" Feb 18 19:19:44 crc kubenswrapper[4942]: 
I0218 19:19:44.974682 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.976508 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.476469474 +0000 UTC m=+145.181402139 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.976917 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.980463 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 19:19:45.480421621 +0000 UTC m=+145.185354476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.994093 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tndhs" podStartSLOduration=123.994071439 podStartE2EDuration="2m3.994071439s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:44.991731536 +0000 UTC m=+144.696664201" watchObservedRunningTime="2026-02-18 19:19:44.994071439 +0000 UTC m=+144.699004104" Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.078446 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.078706 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.57867393 +0000 UTC m=+145.283606595 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.080490 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.112276 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.612254826 +0000 UTC m=+145.317187491 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.197357 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.197952 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.697913515 +0000 UTC m=+145.402846180 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.299475 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.301040 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.801024946 +0000 UTC m=+145.505957611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.387454 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" podStartSLOduration=124.387434936 podStartE2EDuration="2m4.387434936s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:45.386137631 +0000 UTC m=+145.091070296" watchObservedRunningTime="2026-02-18 19:19:45.387434936 +0000 UTC m=+145.092367601" Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.405104 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.407872 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.907845196 +0000 UTC m=+145.612777861 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.410160 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.410679 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.910664512 +0000 UTC m=+145.615597177 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.515350 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" podStartSLOduration=123.51532984400001 podStartE2EDuration="2m3.515329844s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:45.514481451 +0000 UTC m=+145.219414116" watchObservedRunningTime="2026-02-18 19:19:45.515329844 +0000 UTC m=+145.220262509" Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.515710 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.515795 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.015778086 +0000 UTC m=+145.720710751 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.518455 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.518880 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.01886911 +0000 UTC m=+145.723801765 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.540532 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" podStartSLOduration=125.540515423 podStartE2EDuration="2m5.540515423s" podCreationTimestamp="2026-02-18 19:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:45.540408081 +0000 UTC m=+145.245340746" watchObservedRunningTime="2026-02-18 19:19:45.540515423 +0000 UTC m=+145.245448088" Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.618586 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" podStartSLOduration=124.618565948 podStartE2EDuration="2m4.618565948s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:45.618309661 +0000 UTC m=+145.323242326" watchObservedRunningTime="2026-02-18 19:19:45.618565948 +0000 UTC m=+145.323498613" Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.620109 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.620623 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.120607653 +0000 UTC m=+145.825540318 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.669211 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-18 19:14:44 +0000 UTC, rotation deadline is 2026-11-04 01:03:52.474728707 +0000 UTC Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.669248 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6197h44m6.805484182s for next certificate rotation Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.723554 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.724034 4942 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.224002391 +0000 UTC m=+145.928935316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.825145 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.825663 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.325645802 +0000 UTC m=+146.030578467 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.880225 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" event={"ID":"714a349f-4480-4467-9041-7cae31df7686","Type":"ContainerStarted","Data":"e5d2a47ba0ce96a07bff9f97878903390d678e02fc6d15bb8bb3d1df39682423"} Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.885304 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5" event={"ID":"bb96ca2b-27a4-42e3-af7f-3514321500a3","Type":"ContainerStarted","Data":"47429456a27474748583ff17fb1dcdae21305252396f41651b4af5ddedb5f451"} Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.885367 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5" event={"ID":"bb96ca2b-27a4-42e3-af7f-3514321500a3","Type":"ContainerStarted","Data":"bffddcd1642856a53d8a14eecb2793e81b211bf6edef12aaa5d85a3d065430f2"} Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.893322 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7" event={"ID":"696bcbdd-c9ca-45cd-ae12-e733919e2832","Type":"ContainerStarted","Data":"7ae806a37e35c4beb7c6105bf316a46e7ad9818cc4d2fffbc5e24dbe44cb7317"} Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.893379 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7" 
event={"ID":"696bcbdd-c9ca-45cd-ae12-e733919e2832","Type":"ContainerStarted","Data":"1dc34889cbf59b8791ee47c092b48d9fc2128835cc39bc93bf9951ee5a6d0e78"} Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.911082 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" podStartSLOduration=124.911056935 podStartE2EDuration="2m4.911056935s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:45.909184964 +0000 UTC m=+145.614117629" watchObservedRunningTime="2026-02-18 19:19:45.911056935 +0000 UTC m=+145.615989600" Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.933508 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" event={"ID":"994be5c4-0c9d-4577-82e8-644d64c3ab1d","Type":"ContainerStarted","Data":"b1a45066242ae6994dd79542ee99045ae7535a669417f996bc8960fa7960fe6f"} Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.935184 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.935578 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.435564756 +0000 UTC m=+146.140497421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.946339 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" event={"ID":"5d6ad520-b407-4b86-867b-9e9658bfa536","Type":"ContainerStarted","Data":"023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf"} Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.948357 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.949434 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5" podStartSLOduration=123.949409029 podStartE2EDuration="2m3.949409029s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:45.939211804 +0000 UTC m=+145.644144469" watchObservedRunningTime="2026-02-18 19:19:45.949409029 +0000 UTC m=+145.654341694" Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.967433 4942 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-z4t28 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 18 19:19:45 crc 
kubenswrapper[4942]: I0218 19:19:45.967491 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" podUID="5d6ad520-b407-4b86-867b-9e9658bfa536" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.985787 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" event={"ID":"ae259edb-f577-48b8-b236-91656ac269d2","Type":"ContainerStarted","Data":"5cecbc2c8e7a93ff837b8535180d6419cbf30671f05bbc9aeefb44cd259a89dc"} Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.992981 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7" podStartSLOduration=123.992953903 podStartE2EDuration="2m3.992953903s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:45.968147144 +0000 UTC m=+145.673079809" watchObservedRunningTime="2026-02-18 19:19:45.992953903 +0000 UTC m=+145.697886568" Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.994607 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.020097 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" event={"ID":"a873b689-a8f1-4125-b97c-e9d0f6b06397","Type":"ContainerStarted","Data":"53a33a4937e1afc56a58b316072961f268a8e4a1365a52c3259f2ce5f8c81354"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.020147 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" event={"ID":"a873b689-a8f1-4125-b97c-e9d0f6b06397","Type":"ContainerStarted","Data":"89aedd7f93443a63bf862888ba62bbe12ed9d421789f7e6f7f47bb7c9ca5cc3f"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.020910 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.026026 4942 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lrcbr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.026074 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" podUID="a873b689-a8f1-4125-b97c-e9d0f6b06397" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.032674 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" event={"ID":"cbe755cf-b7a2-4557-9368-5d71df455408","Type":"ContainerStarted","Data":"8075d6be95bf7d29fcd6b4ed79f03d35824ff4d5d4466851d8cf2417fc416fa9"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.035942 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.036352 4942 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.536306642 +0000 UTC m=+146.241239307 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.036559 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.038108 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.538099471 +0000 UTC m=+146.243032136 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.058198 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" event={"ID":"af99a6af-5df3-4b87-8f14-a564c5d86164","Type":"ContainerStarted","Data":"ae917070a1fb2ae6049a3cb0e5e19f0b7f5904b856c84b810a11428d3d25d00f"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.089744 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" event={"ID":"05aed8e4-390c-4589-8a61-2aab50a1d90f","Type":"ContainerStarted","Data":"09fdca091a253ff9bd2dc0eac63c261789ab9c1bca446f6ff56e243061cd20cc"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.092187 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" podStartSLOduration=125.092162588 podStartE2EDuration="2m5.092162588s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.002939453 +0000 UTC m=+145.707872118" watchObservedRunningTime="2026-02-18 19:19:46.092162588 +0000 UTC m=+145.797095253" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.114153 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" podStartSLOduration=125.114127591 podStartE2EDuration="2m5.114127591s" 
podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.11410738 +0000 UTC m=+145.819040045" watchObservedRunningTime="2026-02-18 19:19:46.114127591 +0000 UTC m=+145.819060256" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.123856 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" event={"ID":"9b732dca-66e7-48c3-bd7d-5efc1d9662d7","Type":"ContainerStarted","Data":"b5a7bbd5e7ba5c2f6d9d9fc0a9eee13fffbab2434f628724afe27d5345b240b4"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.123950 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" event={"ID":"9b732dca-66e7-48c3-bd7d-5efc1d9662d7","Type":"ContainerStarted","Data":"d1d56ceab59f37ae4741e7e1c65e710e43f1e0d13713932ac9b62ba435cb6040"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.141421 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.143055 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" event={"ID":"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7","Type":"ContainerStarted","Data":"5408388b9e1ab47f7983784f7e9c54819f35fef18e080b4cab94cc3a9cc22231"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.143103 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" 
event={"ID":"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7","Type":"ContainerStarted","Data":"9609c11d3bd02d2379390a0678d9b91ab4b9f13828b5ffd9ce9cb1949f5a7f04"} Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.143166 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.643147263 +0000 UTC m=+146.348079928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.193968 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" event={"ID":"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a","Type":"ContainerStarted","Data":"65909acaa509149c03419b0d66bafc7e7609f918429ef17452b18ee9b7ab4fd8"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.194699 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.232144 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" event={"ID":"157afc1c-f5df-419b-a760-336d14bbbd6d","Type":"ContainerStarted","Data":"4ddc0e64f67697d4ef526590ed52dd0abc67c021061db8899066480a3187ac33"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.232585 4942 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" event={"ID":"157afc1c-f5df-419b-a760-336d14bbbd6d","Type":"ContainerStarted","Data":"8f6ffcaa602c4c74d82a25acb920b5bc142f908014eb2a7b4111736183236ea0"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.238226 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" podStartSLOduration=124.238206595 podStartE2EDuration="2m4.238206595s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.176840331 +0000 UTC m=+145.881773006" watchObservedRunningTime="2026-02-18 19:19:46.238206595 +0000 UTC m=+145.943139260" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.239395 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" podStartSLOduration=124.239389587 podStartE2EDuration="2m4.239389587s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.236991783 +0000 UTC m=+145.941924448" watchObservedRunningTime="2026-02-18 19:19:46.239389587 +0000 UTC m=+145.944322252" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.247081 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" event={"ID":"83c8ec50-d07e-4c96-80b8-22cf232b015c","Type":"ContainerStarted","Data":"8fb89d9b578bb5f34f43df673b2dd799864eeb7e473dd13a2af65613144c2452"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.247130 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" event={"ID":"83c8ec50-d07e-4c96-80b8-22cf232b015c","Type":"ContainerStarted","Data":"f008ad0a63afef1fd1b3f75636edc934773c6ff3eb99cbd42972daa614dbabd7"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.247794 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.248451 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.250272 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.750259 +0000 UTC m=+146.455191665 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.268000 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" podStartSLOduration=125.267983218 podStartE2EDuration="2m5.267983218s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.267156156 +0000 UTC m=+145.972088821" watchObservedRunningTime="2026-02-18 19:19:46.267983218 +0000 UTC m=+145.972915873" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.308864 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" podStartSLOduration=124.3088347 podStartE2EDuration="2m4.3088347s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.305787368 +0000 UTC m=+146.010720033" watchObservedRunningTime="2026-02-18 19:19:46.3088347 +0000 UTC m=+146.013767365" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.331143 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" 
event={"ID":"c8e73114-6ccf-40ba-94e8-437e2db303fb","Type":"ContainerStarted","Data":"cdf5934d19dddd363dbdd2cf3f19ac1b20b020a1df145501f6daf86f7077de32"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.351337 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.352501 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.852481637 +0000 UTC m=+146.557414302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.358352 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" event={"ID":"efab374b-fec3-4b4e-81f1-002715812a67","Type":"ContainerStarted","Data":"1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.359682 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.388455 4942 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" podStartSLOduration=125.38843747600001 podStartE2EDuration="2m5.388437476s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.386445923 +0000 UTC m=+146.091378588" watchObservedRunningTime="2026-02-18 19:19:46.388437476 +0000 UTC m=+146.093370141" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.400077 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xs9jl" event={"ID":"ae63de17-3438-46ed-94f9-5f51d8a216fd","Type":"ContainerStarted","Data":"f5e48d7471916432d9cbe7bf403fb08411929b47a2a217b6d0f003a8e4238ee0"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.400133 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xs9jl" event={"ID":"ae63de17-3438-46ed-94f9-5f51d8a216fd","Type":"ContainerStarted","Data":"78fc4d140922930f6a2633852b8874ac954b960b75d9985a28e8bdb6fdb4b4d8"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.403752 4942 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jfkrb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.403810 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" podUID="efab374b-fec3-4b4e-81f1-002715812a67" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Feb 18 19:19:46 
crc kubenswrapper[4942]: I0218 19:19:46.420062 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" event={"ID":"fa346657-46eb-4817-b206-4c09d46d4a55","Type":"ContainerStarted","Data":"04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.429329 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" podStartSLOduration=125.429303478 podStartE2EDuration="2m5.429303478s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.420912162 +0000 UTC m=+146.125844827" watchObservedRunningTime="2026-02-18 19:19:46.429303478 +0000 UTC m=+146.134236143" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.439659 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.439879 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" event={"ID":"51ed31a1-9bf0-40ff-8bca-041d691662b4","Type":"ContainerStarted","Data":"208faa74619d8143ee886fd8c9a08a6db8b9a28d4026793c511e3bb3bd1b1a6e"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.439908 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" event={"ID":"51ed31a1-9bf0-40ff-8bca-041d691662b4","Type":"ContainerStarted","Data":"caafd366ff0d6b72e6e5c23941738ac2c65895cdf5d540470d5723384db002c6"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.455462 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.455852 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.955839814 +0000 UTC m=+146.660772479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.476163 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql" event={"ID":"0e51d4dc-e813-4166-bb6a-45d083a09d2a","Type":"ContainerStarted","Data":"43de3611589dad5ebefcd809b6f805da55d469f91eabf8ba425d9f9754800f2b"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.476226 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql" event={"ID":"0e51d4dc-e813-4166-bb6a-45d083a09d2a","Type":"ContainerStarted","Data":"6099bd6de926b0c3866a897a61dc3972f770d5e22459eb21f11d46fc377c9b7b"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.476236 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql" event={"ID":"0e51d4dc-e813-4166-bb6a-45d083a09d2a","Type":"ContainerStarted","Data":"14f50396c8170cc2a69c2e6637c93b60bc26e9ebdd445e1de739aeb3a386b19a"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.487608 4942 generic.go:334] "Generic (PLEG): container finished" podID="6890b7aa-fac3-4c00-90cc-4618ddfae25e" containerID="aabc415517a0dc7244ba58d2c2fc6db9a02923059a6710af42a0290ab193e41b" exitCode=0 Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.487727 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" event={"ID":"6890b7aa-fac3-4c00-90cc-4618ddfae25e","Type":"ContainerDied","Data":"aabc415517a0dc7244ba58d2c2fc6db9a02923059a6710af42a0290ab193e41b"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.504180 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" event={"ID":"01ba4570-01bb-4964-8c1d-791c25d72a1a","Type":"ContainerStarted","Data":"5e1dc2e31f1a650ed17f640e417c2728e29699e3f206e468494747757484a591"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.523515 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" podStartSLOduration=124.523487548 podStartE2EDuration="2m4.523487548s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.472188295 +0000 UTC m=+146.177120960" watchObservedRunningTime="2026-02-18 19:19:46.523487548 +0000 UTC m=+146.228420213" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.541095 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn" 
event={"ID":"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae","Type":"ContainerStarted","Data":"5a2671a1f18882459c61a0a3dd8093d7f9cf77bed4bcbedd9e017981b458f1ea"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.541485 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn" event={"ID":"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae","Type":"ContainerStarted","Data":"8132c3f599f90ae6aae6c73184d377f311bcfa30b5c6d6e3837d5606c2fe285d"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.553179 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" event={"ID":"2407a935-a8b9-4894-baaf-7460fee3d22b","Type":"ContainerStarted","Data":"da2cd9ae03daf77b5cfa46a4a680a7d22744c4fe54ef89d19fd2f943747bebfa"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.561047 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.564033 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.06400645 +0000 UTC m=+146.768939115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.564451 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" podStartSLOduration=124.564435232 podStartE2EDuration="2m4.564435232s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.52653934 +0000 UTC m=+146.231472005" watchObservedRunningTime="2026-02-18 19:19:46.564435232 +0000 UTC m=+146.269367897" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.568881 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5l26l" event={"ID":"5683bb73-dc7f-40ed-86cd-0c08f2d38147","Type":"ContainerStarted","Data":"49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.586286 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" podStartSLOduration=125.586264391 podStartE2EDuration="2m5.586264391s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.564528094 +0000 UTC m=+146.269460759" watchObservedRunningTime="2026-02-18 19:19:46.586264391 +0000 UTC m=+146.291197056" Feb 18 19:19:46 
crc kubenswrapper[4942]: I0218 19:19:46.612666 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" event={"ID":"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e","Type":"ContainerStarted","Data":"b851a866367cf3a9e9d464f309cf034e80cf40b320f83dc8f140d81b8ccea539"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.637532 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-s57sd" event={"ID":"67ea138c-808d-40ee-9e77-2435676f7fba","Type":"ContainerStarted","Data":"88a9b6c20ec29b02eb14c5a0666366dcf54085c50b07e9ba5fbb1b8473e769ea"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.651680 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" event={"ID":"0ec933ee-8c36-49a0-8ba5-c7442f4de367","Type":"ContainerStarted","Data":"73b7ee7c3ba82bf630f09cb666ee9aa5b699be5d474b17959eab9af478c5664e"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.651727 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" event={"ID":"0ec933ee-8c36-49a0-8ba5-c7442f4de367","Type":"ContainerStarted","Data":"2cc63270a69f0d0195f40815e636ff4b43d169b2bfb1846e3b0329458017f944"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.656651 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" podStartSLOduration=124.656631108 podStartE2EDuration="2m4.656631108s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.654977303 +0000 UTC m=+146.359909968" watchObservedRunningTime="2026-02-18 19:19:46.656631108 +0000 UTC m=+146.361563773" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 
19:19:46.656922 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-xs9jl" podStartSLOduration=6.656917086 podStartE2EDuration="6.656917086s" podCreationTimestamp="2026-02-18 19:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.614508892 +0000 UTC m=+146.319441557" watchObservedRunningTime="2026-02-18 19:19:46.656917086 +0000 UTC m=+146.361849751" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.657176 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.662666 4942 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zz9rm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.662740 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" podUID="0ec933ee-8c36-49a0-8ba5-c7442f4de367" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.663546 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:46 crc 
kubenswrapper[4942]: E0218 19:19:46.667189 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.167172282 +0000 UTC m=+146.872104947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.710695 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" event={"ID":"9a79c946-4621-4b6d-af59-6b919d125502","Type":"ContainerStarted","Data":"48644340e380b46844ef607e947318dad2a4df524b20e1b04fb054fdc4960453"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.755903 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" event={"ID":"e3586689-cf81-4cd2-84d1-70b0ce221b9d","Type":"ContainerStarted","Data":"be4b176c03020dfebf3570b33951e652a3097cc6fdcfee90689aa5a181dc3945"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.756264 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" event={"ID":"e3586689-cf81-4cd2-84d1-70b0ce221b9d","Type":"ContainerStarted","Data":"e9e909db52ba119b9620f5e6e0717d04945c6a6467173bcb1c7bc30a6b9c5e35"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.766283 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.767052 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.267033765 +0000 UTC m=+146.971966420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.789610 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" podStartSLOduration=125.789594513 podStartE2EDuration="2m5.789594513s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.788174345 +0000 UTC m=+146.493107010" watchObservedRunningTime="2026-02-18 19:19:46.789594513 +0000 UTC m=+146.494527168" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.797082 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" 
event={"ID":"48a8b317-27eb-4d20-93ad-37fa559ec858","Type":"ContainerStarted","Data":"ce2530ce2afe97ddb9fdeab46bbfef9f1ded96ff21bc7fe65fb26a0a5f0a5540"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.797145 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" event={"ID":"48a8b317-27eb-4d20-93ad-37fa559ec858","Type":"ContainerStarted","Data":"fde6f5692dc02c57cf442750d36b31132e33ff72cb1eaa4406c7ce4e5cbbfb65"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.835063 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fgw8l" event={"ID":"8134898c-a265-4fa0-8548-075ea0812b7b","Type":"ContainerStarted","Data":"986d51155a2753a87d6ac316d906b8a21b7b92f640efbc7ca4b3ecf774fa6938"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.835118 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fgw8l" event={"ID":"8134898c-a265-4fa0-8548-075ea0812b7b","Type":"ContainerStarted","Data":"0c447cb8f794395e4c9ce2034f6bb4715b0be332558abecd230814c64a4a0eac"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.862359 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn" podStartSLOduration=124.862335995 podStartE2EDuration="2m4.862335995s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.861205544 +0000 UTC m=+146.566138209" watchObservedRunningTime="2026-02-18 19:19:46.862335995 +0000 UTC m=+146.567268660" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.870998 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-s4kjv" 
event={"ID":"461a0658-ae3b-4972-8122-2719276793b9","Type":"ContainerStarted","Data":"75e51a4243e3dc24eeba0de1cbc9eefb0a23eb0bb0b0a08204eb6a5396608558"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.871050 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-s4kjv"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.871363 4942 patch_prober.go:28] interesting pod/downloads-7954f5f757-tndhs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.871406 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tndhs" podUID="cb8403e3-f9b3-4ddf-8688-1a025a2b9291" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.872466 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.872755 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.372745045 +0000 UTC m=+147.077677710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.885513 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.906114 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" podStartSLOduration=125.906093275 podStartE2EDuration="2m5.906093275s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.904858501 +0000 UTC m=+146.609791176" watchObservedRunningTime="2026-02-18 19:19:46.906093275 +0000 UTC m=+146.611025930"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.977818 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-s57sd" podStartSLOduration=7.977798278 podStartE2EDuration="7.977798278s" podCreationTimestamp="2026-02-18 19:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.941376116 +0000 UTC m=+146.646308781" watchObservedRunningTime="2026-02-18 19:19:46.977798278 +0000 UTC m=+146.682730943"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.978692 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" podStartSLOduration=124.978686262 podStartE2EDuration="2m4.978686262s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.977273174 +0000 UTC m=+146.682205839" watchObservedRunningTime="2026-02-18 19:19:46.978686262 +0000 UTC m=+146.683618927"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.984251 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.985905 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.485876056 +0000 UTC m=+147.190808721 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.064725 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql" podStartSLOduration=126.064700671 podStartE2EDuration="2m6.064700671s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:47.023047818 +0000 UTC m=+146.727980483" watchObservedRunningTime="2026-02-18 19:19:47.064700671 +0000 UTC m=+146.769633336"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.087080 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.087440 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.587424044 +0000 UTC m=+147.292356709 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.113399 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" podStartSLOduration=126.113375344 podStartE2EDuration="2m6.113375344s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:47.108551484 +0000 UTC m=+146.813484149" watchObservedRunningTime="2026-02-18 19:19:47.113375344 +0000 UTC m=+146.818308009"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.113951 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-s4kjv" podStartSLOduration=7.113947759 podStartE2EDuration="7.113947759s" podCreationTimestamp="2026-02-18 19:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:47.06909876 +0000 UTC m=+146.774031425" watchObservedRunningTime="2026-02-18 19:19:47.113947759 +0000 UTC m=+146.818880424"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.190419 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.190527 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.690506284 +0000 UTC m=+147.395438949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.190876 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.191229 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.691222083 +0000 UTC m=+147.396154748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.193006 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" podStartSLOduration=125.19298494 podStartE2EDuration="2m5.19298494s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:47.191588963 +0000 UTC m=+146.896521628" watchObservedRunningTime="2026-02-18 19:19:47.19298494 +0000 UTC m=+146.897917605"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.255466 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-fgw8l" podStartSLOduration=126.255443565 podStartE2EDuration="2m6.255443565s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:47.251254912 +0000 UTC m=+146.956187577" watchObservedRunningTime="2026-02-18 19:19:47.255443565 +0000 UTC m=+146.960376230"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.275675 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.275771 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.293783 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.294308 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.794284852 +0000 UTC m=+147.499217517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.387517 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-fgw8l"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.395934 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.396291 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.896274902 +0000 UTC m=+147.601207567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.397589 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:19:47 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld
Feb 18 19:19:47 crc kubenswrapper[4942]: [+]process-running ok
Feb 18 19:19:47 crc kubenswrapper[4942]: healthz check failed
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.397673 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.497701 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.498089 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.998055036 +0000 UTC m=+147.702987701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.498459 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.498824 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.998809597 +0000 UTC m=+147.703742262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.599432 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.599922 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.099902303 +0000 UTC m=+147.804834968 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.615950 4942 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g5df6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.616051 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" podUID="5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.701591 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.702138 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.202115199 +0000 UTC m=+147.907047864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.802833 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.803314 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.303280417 +0000 UTC m=+148.008213082 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.876141 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" event={"ID":"51ed31a1-9bf0-40ff-8bca-041d691662b4","Type":"ContainerStarted","Data":"32ad471fc189e5a633ffd8159099200358fe5c48344c0dc9526ac321b7f1c8f5"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.879200 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" event={"ID":"6890b7aa-fac3-4c00-90cc-4618ddfae25e","Type":"ContainerStarted","Data":"7748e5ef607983d5270cd4243cc208c0398aafb4dd52ff8a17b3a606606813a9"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.879266 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" event={"ID":"6890b7aa-fac3-4c00-90cc-4618ddfae25e","Type":"ContainerStarted","Data":"b8cc889c625035d34efbc631c55c3b0b102ba0591c1e64a53cd13bcf88045c57"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.880839 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" event={"ID":"01ba4570-01bb-4964-8c1d-791c25d72a1a","Type":"ContainerStarted","Data":"5fb82fb77a7895a43a30ace42481cf4c1da624e8742b15c1cb5a5cf3044d7c22"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.882092 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" event={"ID":"af99a6af-5df3-4b87-8f14-a564c5d86164","Type":"ContainerStarted","Data":"1314083d10e40e71c4cd17f089a72147abfcc4fee1bb370d542c298f25e78b02"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.883479 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" event={"ID":"05aed8e4-390c-4589-8a61-2aab50a1d90f","Type":"ContainerStarted","Data":"5e9e204518cb98e53f5cfb13561837e69ea57625a59ec69cdf232fc75373a59e"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.885326 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" event={"ID":"83c8ec50-d07e-4c96-80b8-22cf232b015c","Type":"ContainerStarted","Data":"5ce9d1a50c6dcfc37ebf364214ff445874e411fd19d28dd01d5dd58e037a60ad"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.887244 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-s4kjv" event={"ID":"461a0658-ae3b-4972-8122-2719276793b9","Type":"ContainerStarted","Data":"3f300ea2b013a0074ea8815ac3e6dd2bde21d5361e0e174bd8c460315a72b91d"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.887271 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-s4kjv" event={"ID":"461a0658-ae3b-4972-8122-2719276793b9","Type":"ContainerStarted","Data":"2197dd032944bf4c31c8f13538d2e095332de34388cbac0f53a2ad55277f35ca"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.888638 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" event={"ID":"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7","Type":"ContainerStarted","Data":"9b0e82244d95209b70718ead33145da919469f201aed63bae4f5aeff682b279e"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.889583 4942 patch_prober.go:28] interesting pod/downloads-7954f5f757-tndhs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.889638 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tndhs" podUID="cb8403e3-f9b3-4ddf-8688-1a025a2b9291" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.897900 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.900542 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.904853 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.905255 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.405240936 +0000 UTC m=+148.110173591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.909178 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.919970 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" podStartSLOduration=126.919954533 podStartE2EDuration="2m6.919954533s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:47.918678388 +0000 UTC m=+147.623611053" watchObservedRunningTime="2026-02-18 19:19:47.919954533 +0000 UTC m=+147.624887188"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.922224 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.948607 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr"
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.006068 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.008987 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.508958933 +0000 UTC m=+148.213891808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.055130 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" podStartSLOduration=126.055107977 podStartE2EDuration="2m6.055107977s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:48.026312701 +0000 UTC m=+147.731245366" watchObservedRunningTime="2026-02-18 19:19:48.055107977 +0000 UTC m=+147.760040652"
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.108887 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.109228 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.609214856 +0000 UTC m=+148.314147521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.221081 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.221544 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.721496944 +0000 UTC m=+148.426429609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.324457 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.325189 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.825167039 +0000 UTC m=+148.530099704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.385634 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:19:48 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld
Feb 18 19:19:48 crc kubenswrapper[4942]: [+]process-running ok
Feb 18 19:19:48 crc kubenswrapper[4942]: healthz check failed
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.385692 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.426461 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.426864 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.926845141 +0000 UTC m=+148.631777806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.448300 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.480364 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm"
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.528800 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.529571 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.02955667 +0000 UTC m=+148.734489335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.630236 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.630500 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.130468111 +0000 UTC m=+148.835400776 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.631029 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.631474 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.131458528 +0000 UTC m=+148.836391203 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.733230 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.734285 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.23426677 +0000 UTC m=+148.939199435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.776102 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gjnbk"] Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.777541 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.781250 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.835686 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.836155 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.336139447 +0000 UTC m=+149.041072102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.859563 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjnbk"] Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.903295 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" event={"ID":"af99a6af-5df3-4b87-8f14-a564c5d86164","Type":"ContainerStarted","Data":"7cda0c908c76ae8935029e4e7acf22d17e9f3e667885843090b60d581423bc78"} Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.903358 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" event={"ID":"af99a6af-5df3-4b87-8f14-a564c5d86164","Type":"ContainerStarted","Data":"6c5356081faf18c6411088b4f8fffb5713c5e43020c169a23e6820e7fb892b5b"} Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.905557 4942 generic.go:334] "Generic (PLEG): container finished" podID="01ba4570-01bb-4964-8c1d-791c25d72a1a" containerID="5fb82fb77a7895a43a30ace42481cf4c1da624e8742b15c1cb5a5cf3044d7c22" exitCode=0 Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.905815 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" event={"ID":"01ba4570-01bb-4964-8c1d-791c25d72a1a","Type":"ContainerDied","Data":"5fb82fb77a7895a43a30ace42481cf4c1da624e8742b15c1cb5a5cf3044d7c22"} Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.916080 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.936772 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.937036 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.436989577 +0000 UTC m=+149.141922282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.937122 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.937335 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89296\" (UniqueName: 
\"kubernetes.io/projected/a7f05662-6e61-4d86-8a52-13000d4bd2be-kube-api-access-89296\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.937510 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-utilities\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.937605 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-catalog-content\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.937781 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.437740337 +0000 UTC m=+149.142673002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.981325 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tm22r"] Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.982485 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.990729 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.039191 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.039479 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-catalog-content\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.040271 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89296\" (UniqueName: 
\"kubernetes.io/projected/a7f05662-6e61-4d86-8a52-13000d4bd2be-kube-api-access-89296\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.040558 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-utilities\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.041073 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.541051382 +0000 UTC m=+149.245984047 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.042436 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-catalog-content\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.044334 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-utilities\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.086893 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tm22r"] Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.096303 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89296\" (UniqueName: \"kubernetes.io/projected/a7f05662-6e61-4d86-8a52-13000d4bd2be-kube-api-access-89296\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.141742 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.141845 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-utilities\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.141879 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-catalog-content\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.141931 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw4w8\" (UniqueName: \"kubernetes.io/projected/9b0511d8-736f-48fa-94a5-9a45e8482467-kube-api-access-lw4w8\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.142401 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.642383215 +0000 UTC m=+149.347315880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.166068 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tk5v7"] Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.167109 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.192104 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tk5v7"] Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.242777 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.243135 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-utilities\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.243183 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-catalog-content\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.243240 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw4w8\" (UniqueName: \"kubernetes.io/projected/9b0511d8-736f-48fa-94a5-9a45e8482467-kube-api-access-lw4w8\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.244214 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-catalog-content\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.244311 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.744294223 +0000 UTC m=+149.449226888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.244645 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-utilities\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.274416 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw4w8\" (UniqueName: \"kubernetes.io/projected/9b0511d8-736f-48fa-94a5-9a45e8482467-kube-api-access-lw4w8\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.295395 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.344791 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-catalog-content\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.344840 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-utilities\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.344887 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.344925 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98p66\" (UniqueName: \"kubernetes.io/projected/934bc032-4641-47ee-9689-39edb4e5a24a-kube-api-access-98p66\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.345478 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.84543871 +0000 UTC m=+149.550371455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.392018 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:19:49 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:19:49 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:19:49 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.392136 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.396478 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.404635 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c28tv"] Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.405650 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.427753 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c28tv"] Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.446689 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.447181 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-catalog-content\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.447235 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-utilities\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.447325 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98p66\" (UniqueName: \"kubernetes.io/projected/934bc032-4641-47ee-9689-39edb4e5a24a-kube-api-access-98p66\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.447946 4942 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.947905033 +0000 UTC m=+149.652837718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.449277 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-catalog-content\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.449294 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-utilities\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.507163 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98p66\" (UniqueName: \"kubernetes.io/projected/934bc032-4641-47ee-9689-39edb4e5a24a-kube-api-access-98p66\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.548988 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-catalog-content\") pod \"certified-operators-c28tv\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.549051 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.549083 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjfn2\" (UniqueName: \"kubernetes.io/projected/02b9174b-0251-447d-8266-56e92f6e9be1-kube-api-access-rjfn2\") pod \"certified-operators-c28tv\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.549111 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-utilities\") pod \"certified-operators-c28tv\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.549468 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.049455331 +0000 UTC m=+149.754387996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.652406 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.652679 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.152645274 +0000 UTC m=+149.857577939 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.652794 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-catalog-content\") pod \"certified-operators-c28tv\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.652931 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.652975 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjfn2\" (UniqueName: \"kubernetes.io/projected/02b9174b-0251-447d-8266-56e92f6e9be1-kube-api-access-rjfn2\") pod \"certified-operators-c28tv\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.653041 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-utilities\") pod \"certified-operators-c28tv\" (UID: 
\"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.653612 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-utilities\") pod \"certified-operators-c28tv\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.653847 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-catalog-content\") pod \"certified-operators-c28tv\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.654099 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.154092513 +0000 UTC m=+149.859025178 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.695739 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjfn2\" (UniqueName: \"kubernetes.io/projected/02b9174b-0251-447d-8266-56e92f6e9be1-kube-api-access-rjfn2\") pod \"certified-operators-c28tv\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.722224 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.754945 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.755819 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.255798214 +0000 UTC m=+149.960730869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.795100 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.857052 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.857417 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.357405374 +0000 UTC m=+150.062338039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.875161 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tm22r"] Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.892253 4942 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.962052 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.962345 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.462279192 +0000 UTC m=+150.167211857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.962466 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.962831 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.962876 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.963044 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.463025342 +0000 UTC m=+150.167958007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.969273 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" event={"ID":"af99a6af-5df3-4b87-8f14-a564c5d86164","Type":"ContainerStarted","Data":"6e237fc824969bf20176670a4af0fe4f179c5f11b94b08b852ef7d05237298da"} Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.970127 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.973140 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.031447 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" podStartSLOduration=11.031419196 
podStartE2EDuration="11.031419196s" podCreationTimestamp="2026-02-18 19:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:50.022535117 +0000 UTC m=+149.727467782" watchObservedRunningTime="2026-02-18 19:19:50.031419196 +0000 UTC m=+149.736351861" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.051483 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjnbk"] Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.064347 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:50 crc kubenswrapper[4942]: E0218 19:19:50.065652 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.565632809 +0000 UTC m=+150.270565474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.092218 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.160947 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.174160 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.174644 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:50 crc kubenswrapper[4942]: E0218 19:19:50.174680 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.674658329 +0000 UTC m=+150.379590984 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.174743 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.179981 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.191923 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.249672 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c28tv"] Feb 18 19:19:50 crc 
kubenswrapper[4942]: I0218 19:19:50.275617 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:50 crc kubenswrapper[4942]: E0218 19:19:50.276037 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.775994591 +0000 UTC m=+150.480927266 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.377683 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:50 crc kubenswrapper[4942]: E0218 19:19:50.378037 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 19:19:50.878024282 +0000 UTC m=+150.582956947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.378307 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.390615 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:19:50 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:19:50 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:19:50 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.390675 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.404846 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tk5v7"] Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.409499 4942 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-18T19:19:49.892280804Z","Handler":null,"Name":""} Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.411840 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.412565 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.424316 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.428824 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.429875 4942 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.429925 4942 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.479999 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.481151 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.481246 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.486456 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.505382 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.582052 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.582114 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.582159 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.582250 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.608569 4942 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.608610 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.639640 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.687386 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.740328 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.789271 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.789467 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01ba4570-01bb-4964-8c1d-791c25d72a1a-secret-volume\") pod \"01ba4570-01bb-4964-8c1d-791c25d72a1a\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.789513 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g7fx\" (UniqueName: \"kubernetes.io/projected/01ba4570-01bb-4964-8c1d-791c25d72a1a-kube-api-access-7g7fx\") pod \"01ba4570-01bb-4964-8c1d-791c25d72a1a\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.789664 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01ba4570-01bb-4964-8c1d-791c25d72a1a-config-volume\") pod \"01ba4570-01bb-4964-8c1d-791c25d72a1a\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.790586 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ba4570-01bb-4964-8c1d-791c25d72a1a-config-volume" (OuterVolumeSpecName: "config-volume") pod "01ba4570-01bb-4964-8c1d-791c25d72a1a" (UID: "01ba4570-01bb-4964-8c1d-791c25d72a1a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.796895 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ba4570-01bb-4964-8c1d-791c25d72a1a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "01ba4570-01bb-4964-8c1d-791c25d72a1a" (UID: "01ba4570-01bb-4964-8c1d-791c25d72a1a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.797117 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ba4570-01bb-4964-8c1d-791c25d72a1a-kube-api-access-7g7fx" (OuterVolumeSpecName: "kube-api-access-7g7fx") pod "01ba4570-01bb-4964-8c1d-791c25d72a1a" (UID: "01ba4570-01bb-4964-8c1d-791c25d72a1a"). InnerVolumeSpecName "kube-api-access-7g7fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:19:50 crc kubenswrapper[4942]: W0218 19:19:50.820233 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-a093cb01c937fdb5121367f0adbbd61b3fc43c52df093cc472879fb6fbf77971 WatchSource:0}: Error finding container a093cb01c937fdb5121367f0adbbd61b3fc43c52df093cc472879fb6fbf77971: Status 404 returned error can't find the container with id a093cb01c937fdb5121367f0adbbd61b3fc43c52df093cc472879fb6fbf77971 Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.844233 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.891229 4942 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01ba4570-01bb-4964-8c1d-791c25d72a1a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.891255 4942 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01ba4570-01bb-4964-8c1d-791c25d72a1a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.891265 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g7fx\" (UniqueName: \"kubernetes.io/projected/01ba4570-01bb-4964-8c1d-791c25d72a1a-kube-api-access-7g7fx\") on node \"crc\" DevicePath \"\"" Feb 18 19:19:50 crc kubenswrapper[4942]: W0218 19:19:50.922467 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-b3bfb2a8535c25a420ab989ee36a2598252571329bfa55dddc61982069c783f6 WatchSource:0}: Error finding container b3bfb2a8535c25a420ab989ee36a2598252571329bfa55dddc61982069c783f6: Status 404 returned error can't find the container with id b3bfb2a8535c25a420ab989ee36a2598252571329bfa55dddc61982069c783f6 Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.950295 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vrlpg"] Feb 18 19:19:50 crc kubenswrapper[4942]: E0218 19:19:50.950520 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ba4570-01bb-4964-8c1d-791c25d72a1a" containerName="collect-profiles" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.950533 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ba4570-01bb-4964-8c1d-791c25d72a1a" 
containerName="collect-profiles" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.950640 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="01ba4570-01bb-4964-8c1d-791c25d72a1a" containerName="collect-profiles" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.951654 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.953870 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.959010 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrlpg"] Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.017065 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" event={"ID":"01ba4570-01bb-4964-8c1d-791c25d72a1a","Type":"ContainerDied","Data":"5e1dc2e31f1a650ed17f640e417c2728e29699e3f206e468494747757484a591"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.017121 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e1dc2e31f1a650ed17f640e417c2728e29699e3f206e468494747757484a591" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.017184 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.024526 4942 generic.go:334] "Generic (PLEG): container finished" podID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerID="75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f" exitCode=0 Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.024568 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tm22r" event={"ID":"9b0511d8-736f-48fa-94a5-9a45e8482467","Type":"ContainerDied","Data":"75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.024583 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tm22r" event={"ID":"9b0511d8-736f-48fa-94a5-9a45e8482467","Type":"ContainerStarted","Data":"903844334b076d9d3fb48a98e733d182c6c0ea5de7f8aeb1362b7e203a4a8fa4"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.026124 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.030627 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"656ad242e374557a7c0f48ee3bd1592417527c18ddcd2b2f34caa19c26d34b58"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.046582 4942 generic.go:334] "Generic (PLEG): container finished" podID="02b9174b-0251-447d-8266-56e92f6e9be1" containerID="01a84768c2d4f4eb7b1180f3d4ce6ea22f3b2fc585b9417ed7bc6475cacbd4a4" exitCode=0 Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.053581 4942 generic.go:334] "Generic (PLEG): container finished" podID="a7f05662-6e61-4d86-8a52-13000d4bd2be" 
containerID="e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4" exitCode=0 Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.053940 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.054802 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c28tv" event={"ID":"02b9174b-0251-447d-8266-56e92f6e9be1","Type":"ContainerDied","Data":"01a84768c2d4f4eb7b1180f3d4ce6ea22f3b2fc585b9417ed7bc6475cacbd4a4"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.054835 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c28tv" event={"ID":"02b9174b-0251-447d-8266-56e92f6e9be1","Type":"ContainerStarted","Data":"8c6af65ee9862a635b1667bd69dde2c1cdffc885f9052d205608bd240b148144"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.054862 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjnbk" event={"ID":"a7f05662-6e61-4d86-8a52-13000d4bd2be","Type":"ContainerDied","Data":"e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.054877 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjnbk" event={"ID":"a7f05662-6e61-4d86-8a52-13000d4bd2be","Type":"ContainerStarted","Data":"48be7d221e592c508e0024c55b4c7ad66329680b58e7532a74bd5a930a0ac4bd"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.065066 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a093cb01c937fdb5121367f0adbbd61b3fc43c52df093cc472879fb6fbf77971"} Feb 18 19:19:51 crc 
kubenswrapper[4942]: I0218 19:19:51.068897 4942 generic.go:334] "Generic (PLEG): container finished" podID="934bc032-4641-47ee-9689-39edb4e5a24a" containerID="7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7" exitCode=0 Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.068966 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk5v7" event={"ID":"934bc032-4641-47ee-9689-39edb4e5a24a","Type":"ContainerDied","Data":"7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.068996 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk5v7" event={"ID":"934bc032-4641-47ee-9689-39edb4e5a24a","Type":"ContainerStarted","Data":"5c780f3eaf3a7663544d07b41d9cc753cd4008f1802dbe09d0227e582dd487c7"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.073430 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b3bfb2a8535c25a420ab989ee36a2598252571329bfa55dddc61982069c783f6"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.094249 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-utilities\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.094297 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8vnr\" (UniqueName: \"kubernetes.io/projected/07639322-4f8b-47d5-85c7-da678ca9eaf1-kube-api-access-h8vnr\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " 
pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.094329 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-catalog-content\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.198172 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-utilities\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.198292 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8vnr\" (UniqueName: \"kubernetes.io/projected/07639322-4f8b-47d5-85c7-da678ca9eaf1-kube-api-access-h8vnr\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.198339 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-catalog-content\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.206754 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-utilities\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " 
pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.206866 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-catalog-content\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.255735 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8vnr\" (UniqueName: \"kubernetes.io/projected/07639322-4f8b-47d5-85c7-da678ca9eaf1-kube-api-access-h8vnr\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.269638 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2fcrf"] Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.282506 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.362296 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w75d5"] Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.363704 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.380160 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w75d5"] Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.384913 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:19:51 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:19:51 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:19:51 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.384975 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.400576 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-utilities\") pod \"redhat-marketplace-w75d5\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.400835 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-catalog-content\") pod \"redhat-marketplace-w75d5\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.400935 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db9ht\" (UniqueName: \"kubernetes.io/projected/f8dc55ee-28aa-4789-96c1-0809c7abdc99-kube-api-access-db9ht\") pod \"redhat-marketplace-w75d5\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.477142 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.501556 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-catalog-content\") pod \"redhat-marketplace-w75d5\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.501703 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db9ht\" (UniqueName: \"kubernetes.io/projected/f8dc55ee-28aa-4789-96c1-0809c7abdc99-kube-api-access-db9ht\") pod \"redhat-marketplace-w75d5\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.501831 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-utilities\") pod \"redhat-marketplace-w75d5\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.502531 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-utilities\") pod \"redhat-marketplace-w75d5\" (UID: 
\"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.503168 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-catalog-content\") pod \"redhat-marketplace-w75d5\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.531756 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db9ht\" (UniqueName: \"kubernetes.io/projected/f8dc55ee-28aa-4789-96c1-0809c7abdc99-kube-api-access-db9ht\") pod \"redhat-marketplace-w75d5\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.610615 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrlpg"] Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.824358 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.954429 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5dn7d"] Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.958253 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5dn7d" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.975358 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.983433 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5dn7d"] Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.009232 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjnb8\" (UniqueName: \"kubernetes.io/projected/fc54a822-e044-4d85-a0a8-499a79d09aaf-kube-api-access-bjnb8\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.009411 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-utilities\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.009469 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-catalog-content\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.048053 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w75d5"] Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.096970 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4d362dd3-7195-4c71-9a1c-b4170b339f6d","Type":"ContainerStarted","Data":"d611b2fd6d19ba6e77558cb0cceca6c0fc49ca36f03c6714a84c829a14578ad1"} Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.097026 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4d362dd3-7195-4c71-9a1c-b4170b339f6d","Type":"ContainerStarted","Data":"770c328bd70507fcba345ada4ee9d1bbc463303df462a624765af1530b1f96a8"} Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.103383 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b3a6fc6222411fdb47064907c4990ec6c44fd3b214fc9a57b775bd8ce1fc878f"} Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.105375 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9b470e6050affd8d474fdc4cac7d36379ebcd294fa3642bdcd62d0bf86676651"} Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.105825 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.110560 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-catalog-content\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.110566 4942 generic.go:334] "Generic (PLEG): container finished" podID="07639322-4f8b-47d5-85c7-da678ca9eaf1" 
containerID="656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a" exitCode=0 Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.110604 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-utilities\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.110689 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjnb8\" (UniqueName: \"kubernetes.io/projected/fc54a822-e044-4d85-a0a8-499a79d09aaf-kube-api-access-bjnb8\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.110705 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrlpg" event={"ID":"07639322-4f8b-47d5-85c7-da678ca9eaf1","Type":"ContainerDied","Data":"656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a"} Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.110835 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrlpg" event={"ID":"07639322-4f8b-47d5-85c7-da678ca9eaf1","Type":"ContainerStarted","Data":"1342033222b8b7017fedfcc1a993530ce3bb6c2c950b03c672270884763e7952"} Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.111640 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-catalog-content\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.112063 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-utilities\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.118198 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w75d5" event={"ID":"f8dc55ee-28aa-4789-96c1-0809c7abdc99","Type":"ContainerStarted","Data":"bf827b49c615857eb54c5c1b4eb25133056e0a9065497fbb34a9215010ac6e9f"} Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.123543 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c30f7cef23c7c260de5e039e122b4d9c004a73673d2db332b83648495c2b3ced"} Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.132494 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjnb8\" (UniqueName: \"kubernetes.io/projected/fc54a822-e044-4d85-a0a8-499a79d09aaf-kube-api-access-bjnb8\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.134221 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" event={"ID":"087f0c6b-3e9f-4db4-bbcb-a8075e218219","Type":"ContainerStarted","Data":"91e860bb5e26a16c65c27e2d570478576e7d6d20c751b07a7d8ecff08551af59"} Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.134274 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" 
event={"ID":"087f0c6b-3e9f-4db4-bbcb-a8075e218219","Type":"ContainerStarted","Data":"c9af7faf6591829dd44fe7e25f59f09e1004d7cfb6e0f93079ef222657176a3e"} Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.134681 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.149243 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.149209141 podStartE2EDuration="2.149209141s" podCreationTimestamp="2026-02-18 19:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:52.108792082 +0000 UTC m=+151.813724757" watchObservedRunningTime="2026-02-18 19:19:52.149209141 +0000 UTC m=+151.854141806" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.171421 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" podStartSLOduration=131.17140105 podStartE2EDuration="2m11.17140105s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:52.167506415 +0000 UTC m=+151.872439080" watchObservedRunningTime="2026-02-18 19:19:52.17140105 +0000 UTC m=+151.876333715" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.211978 4942 patch_prober.go:28] interesting pod/downloads-7954f5f757-tndhs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.211991 4942 patch_prober.go:28] interesting pod/downloads-7954f5f757-tndhs 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.212060 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tndhs" podUID="cb8403e3-f9b3-4ddf-8688-1a025a2b9291" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.212098 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tndhs" podUID="cb8403e3-f9b3-4ddf-8688-1a025a2b9291" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.297712 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dn7d" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.364147 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5jvhl"] Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.365981 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5jvhl" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.378552 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5jvhl"] Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.392211 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:19:52 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:19:52 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:19:52 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.392407 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.414933 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-catalog-content\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.414995 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr6h8\" (UniqueName: \"kubernetes.io/projected/44e2bd42-1f23-4563-a1d2-7765ab9181f6-kube-api-access-tr6h8\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.415083 4942 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-utilities\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.558795 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-catalog-content\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.558951 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr6h8\" (UniqueName: \"kubernetes.io/projected/44e2bd42-1f23-4563-a1d2-7765ab9181f6-kube-api-access-tr6h8\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.559242 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-utilities\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.559581 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-catalog-content\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.559835 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-utilities\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.566339 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.567948 4942 patch_prober.go:28] interesting pod/console-f9d7485db-5l26l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.567999 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-5l26l" podUID="5683bb73-dc7f-40ed-86cd-0c08f2d38147" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.568207 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.580556 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr6h8\" (UniqueName: \"kubernetes.io/projected/44e2bd42-1f23-4563-a1d2-7765ab9181f6-kube-api-access-tr6h8\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.680738 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.682846 4942 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.694157 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5dn7d"] Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.695323 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.789127 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5jvhl" Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.161859 4942 generic.go:334] "Generic (PLEG): container finished" podID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerID="45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980" exitCode=0 Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.162049 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dn7d" event={"ID":"fc54a822-e044-4d85-a0a8-499a79d09aaf","Type":"ContainerDied","Data":"45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980"} Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.162307 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dn7d" event={"ID":"fc54a822-e044-4d85-a0a8-499a79d09aaf","Type":"ContainerStarted","Data":"56923a9d84e1c384a4de3a0f2cac66f27ae78aee76d844588bcd57af55695ead"} Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.165724 4942 generic.go:334] "Generic (PLEG): container finished" podID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerID="b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd" exitCode=0 Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.165822 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w75d5" 
event={"ID":"f8dc55ee-28aa-4789-96c1-0809c7abdc99","Type":"ContainerDied","Data":"b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd"} Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.180240 4942 generic.go:334] "Generic (PLEG): container finished" podID="4d362dd3-7195-4c71-9a1c-b4170b339f6d" containerID="d611b2fd6d19ba6e77558cb0cceca6c0fc49ca36f03c6714a84c829a14578ad1" exitCode=0 Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.193954 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4d362dd3-7195-4c71-9a1c-b4170b339f6d","Type":"ContainerDied","Data":"d611b2fd6d19ba6e77558cb0cceca6c0fc49ca36f03c6714a84c829a14578ad1"} Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.200927 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.366344 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5jvhl"] Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.380264 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.385369 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:19:53 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:19:53 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:19:53 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.385431 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" 
podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:19:53 crc kubenswrapper[4942]: W0218 19:19:53.432960 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44e2bd42_1f23_4563_a1d2_7765ab9181f6.slice/crio-076f790451e8965bca0e7fb3c29d623ec83f9c2b76666a5189e58eb3eab1c839 WatchSource:0}: Error finding container 076f790451e8965bca0e7fb3c29d623ec83f9c2b76666a5189e58eb3eab1c839: Status 404 returned error can't find the container with id 076f790451e8965bca0e7fb3c29d623ec83f9c2b76666a5189e58eb3eab1c839 Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.483273 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.484018 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.490332 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.491218 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.502028 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.601808 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aba94fa-2207-4cae-8a64-536109c9c967-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6aba94fa-2207-4cae-8a64-536109c9c967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:19:53 
crc kubenswrapper[4942]: I0218 19:19:53.601882 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aba94fa-2207-4cae-8a64-536109c9c967-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6aba94fa-2207-4cae-8a64-536109c9c967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.709452 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aba94fa-2207-4cae-8a64-536109c9c967-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6aba94fa-2207-4cae-8a64-536109c9c967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.709885 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aba94fa-2207-4cae-8a64-536109c9c967-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6aba94fa-2207-4cae-8a64-536109c9c967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.709898 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aba94fa-2207-4cae-8a64-536109c9c967-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6aba94fa-2207-4cae-8a64-536109c9c967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.742198 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.742256 4942 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.744893 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aba94fa-2207-4cae-8a64-536109c9c967-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6aba94fa-2207-4cae-8a64-536109c9c967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.805397 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.088278 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.189419 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6aba94fa-2207-4cae-8a64-536109c9c967","Type":"ContainerStarted","Data":"8b70d6fbaa88c2459201d464b33a2dadc0c468a0749f9e8892f0e9c58f4a80f0"} Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.195810 4942 generic.go:334] "Generic (PLEG): container finished" podID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerID="9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74" exitCode=0 Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.196102 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jvhl" event={"ID":"44e2bd42-1f23-4563-a1d2-7765ab9181f6","Type":"ContainerDied","Data":"9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74"} Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 
19:19:54.196139 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jvhl" event={"ID":"44e2bd42-1f23-4563-a1d2-7765ab9181f6","Type":"ContainerStarted","Data":"076f790451e8965bca0e7fb3c29d623ec83f9c2b76666a5189e58eb3eab1c839"} Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.412885 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:19:54 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:19:54 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:19:54 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.412953 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.448637 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.525587 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kube-api-access\") pod \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\" (UID: \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\") " Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.526881 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kubelet-dir\") pod \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\" (UID: \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\") " Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.526981 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4d362dd3-7195-4c71-9a1c-b4170b339f6d" (UID: "4d362dd3-7195-4c71-9a1c-b4170b339f6d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.527240 4942 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.535260 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4d362dd3-7195-4c71-9a1c-b4170b339f6d" (UID: "4d362dd3-7195-4c71-9a1c-b4170b339f6d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.628285 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:19:55 crc kubenswrapper[4942]: I0218 19:19:55.203873 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4d362dd3-7195-4c71-9a1c-b4170b339f6d","Type":"ContainerDied","Data":"770c328bd70507fcba345ada4ee9d1bbc463303df462a624765af1530b1f96a8"} Feb 18 19:19:55 crc kubenswrapper[4942]: I0218 19:19:55.203919 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="770c328bd70507fcba345ada4ee9d1bbc463303df462a624765af1530b1f96a8" Feb 18 19:19:55 crc kubenswrapper[4942]: I0218 19:19:55.204016 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:55 crc kubenswrapper[4942]: I0218 19:19:55.212860 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6aba94fa-2207-4cae-8a64-536109c9c967","Type":"ContainerStarted","Data":"d09193c45f6e62ddc9dbffb92ad54e0e6ea5a2a3f0c6a2dae876860e6f985516"} Feb 18 19:19:55 crc kubenswrapper[4942]: I0218 19:19:55.250529 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-s4kjv" Feb 18 19:19:55 crc kubenswrapper[4942]: I0218 19:19:55.273667 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.2732953 podStartE2EDuration="2.2732953s" podCreationTimestamp="2026-02-18 19:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 19:19:55.222669684 +0000 UTC m=+154.927602349" watchObservedRunningTime="2026-02-18 19:19:55.2732953 +0000 UTC m=+154.978227965" Feb 18 19:19:55 crc kubenswrapper[4942]: I0218 19:19:55.384929 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:19:55 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:19:55 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:19:55 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:55 crc kubenswrapper[4942]: I0218 19:19:55.385333 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:19:56 crc kubenswrapper[4942]: I0218 19:19:56.226017 4942 generic.go:334] "Generic (PLEG): container finished" podID="6aba94fa-2207-4cae-8a64-536109c9c967" containerID="d09193c45f6e62ddc9dbffb92ad54e0e6ea5a2a3f0c6a2dae876860e6f985516" exitCode=0 Feb 18 19:19:56 crc kubenswrapper[4942]: I0218 19:19:56.226077 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6aba94fa-2207-4cae-8a64-536109c9c967","Type":"ContainerDied","Data":"d09193c45f6e62ddc9dbffb92ad54e0e6ea5a2a3f0c6a2dae876860e6f985516"} Feb 18 19:19:56 crc kubenswrapper[4942]: I0218 19:19:56.383392 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:19:56 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:19:56 crc kubenswrapper[4942]: 
[+]process-running ok Feb 18 19:19:56 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:56 crc kubenswrapper[4942]: I0218 19:19:56.383464 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:19:57 crc kubenswrapper[4942]: I0218 19:19:57.382024 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:19:57 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:19:57 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:19:57 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:57 crc kubenswrapper[4942]: I0218 19:19:57.382133 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:19:58 crc kubenswrapper[4942]: I0218 19:19:58.382972 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:19:58 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:19:58 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:19:58 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:58 crc kubenswrapper[4942]: I0218 19:19:58.383308 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:19:59 crc kubenswrapper[4942]: I0218 19:19:59.383295 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:19:59 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:19:59 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:19:59 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:59 crc kubenswrapper[4942]: I0218 19:19:59.383362 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:20:00 crc kubenswrapper[4942]: I0218 19:20:00.382534 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:20:00 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:20:00 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:20:00 crc kubenswrapper[4942]: healthz check failed Feb 18 19:20:00 crc kubenswrapper[4942]: I0218 19:20:00.382598 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:20:01 crc kubenswrapper[4942]: I0218 19:20:01.383355 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 18 19:20:01 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:20:01 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:20:01 crc kubenswrapper[4942]: healthz check failed Feb 18 19:20:01 crc kubenswrapper[4942]: I0218 19:20:01.383420 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.216429 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tndhs" Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.382564 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:20:02 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:20:02 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:20:02 crc kubenswrapper[4942]: healthz check failed Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.382620 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.566339 4942 patch_prober.go:28] interesting pod/console-f9d7485db-5l26l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.566406 4942 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-5l26l" podUID="5683bb73-dc7f-40ed-86cd-0c08f2d38147" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.777505 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.841431 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aba94fa-2207-4cae-8a64-536109c9c967-kubelet-dir\") pod \"6aba94fa-2207-4cae-8a64-536109c9c967\" (UID: \"6aba94fa-2207-4cae-8a64-536109c9c967\") " Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.841555 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aba94fa-2207-4cae-8a64-536109c9c967-kube-api-access\") pod \"6aba94fa-2207-4cae-8a64-536109c9c967\" (UID: \"6aba94fa-2207-4cae-8a64-536109c9c967\") " Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.841587 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6aba94fa-2207-4cae-8a64-536109c9c967-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6aba94fa-2207-4cae-8a64-536109c9c967" (UID: "6aba94fa-2207-4cae-8a64-536109c9c967"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.845003 4942 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aba94fa-2207-4cae-8a64-536109c9c967-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.854872 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aba94fa-2207-4cae-8a64-536109c9c967-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6aba94fa-2207-4cae-8a64-536109c9c967" (UID: "6aba94fa-2207-4cae-8a64-536109c9c967"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.946754 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aba94fa-2207-4cae-8a64-536109c9c967-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:03 crc kubenswrapper[4942]: I0218 19:20:03.306177 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6aba94fa-2207-4cae-8a64-536109c9c967","Type":"ContainerDied","Data":"8b70d6fbaa88c2459201d464b33a2dadc0c468a0749f9e8892f0e9c58f4a80f0"} Feb 18 19:20:03 crc kubenswrapper[4942]: I0218 19:20:03.306248 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b70d6fbaa88c2459201d464b33a2dadc0c468a0749f9e8892f0e9c58f4a80f0" Feb 18 19:20:03 crc kubenswrapper[4942]: I0218 19:20:03.306342 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:20:03 crc kubenswrapper[4942]: I0218 19:20:03.388355 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:20:03 crc kubenswrapper[4942]: I0218 19:20:03.391526 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:20:04 crc kubenswrapper[4942]: I0218 19:20:04.280836 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:20:04 crc kubenswrapper[4942]: I0218 19:20:04.472236 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:20:04 crc kubenswrapper[4942]: I0218 19:20:04.761159 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:20:04 crc kubenswrapper[4942]: I0218 19:20:04.996860 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qwg6q"] Feb 18 19:20:05 crc kubenswrapper[4942]: W0218 19:20:05.006034 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac5b5f40_34db_4aeb_abb4_57204673bd53.slice/crio-17a3cc3533e30c7fbfd9660f8bcd8c7e569611b1699b0b92c095537673688fef WatchSource:0}: Error finding container 17a3cc3533e30c7fbfd9660f8bcd8c7e569611b1699b0b92c095537673688fef: Status 404 returned error can't find the container with id 17a3cc3533e30c7fbfd9660f8bcd8c7e569611b1699b0b92c095537673688fef Feb 18 19:20:05 crc kubenswrapper[4942]: I0218 19:20:05.320913 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" event={"ID":"ac5b5f40-34db-4aeb-abb4-57204673bd53","Type":"ContainerStarted","Data":"17a3cc3533e30c7fbfd9660f8bcd8c7e569611b1699b0b92c095537673688fef"} Feb 18 19:20:07 crc kubenswrapper[4942]: I0218 19:20:07.335034 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" event={"ID":"ac5b5f40-34db-4aeb-abb4-57204673bd53","Type":"ContainerStarted","Data":"ec8b6a4ddaadb7281693f90b113cfc80e98418e1a90f41db6206c8f1d36cc3f6"} Feb 18 19:20:08 crc kubenswrapper[4942]: I0218 19:20:08.347668 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" event={"ID":"ac5b5f40-34db-4aeb-abb4-57204673bd53","Type":"ContainerStarted","Data":"9e9f58eac6f7fe85b907639b1da53c3daf775ef93c64e93576c0047e49dcd4b1"} Feb 18 19:20:09 crc kubenswrapper[4942]: I0218 19:20:09.377480 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-qwg6q" podStartSLOduration=148.377453696 
podStartE2EDuration="2m28.377453696s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:09.372569425 +0000 UTC m=+169.077502090" watchObservedRunningTime="2026-02-18 19:20:09.377453696 +0000 UTC m=+169.082386361" Feb 18 19:20:10 crc kubenswrapper[4942]: I0218 19:20:10.797903 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:20:12 crc kubenswrapper[4942]: I0218 19:20:12.579391 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:20:12 crc kubenswrapper[4942]: I0218 19:20:12.589850 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:20:20 crc kubenswrapper[4942]: E0218 19:20:20.663116 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 19:20:20 crc kubenswrapper[4942]: E0218 19:20:20.664259 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8vnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vrlpg_openshift-marketplace(07639322-4f8b-47d5-85c7-da678ca9eaf1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:20:20 crc kubenswrapper[4942]: E0218 19:20:20.665540 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vrlpg" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" Feb 18 19:20:20 crc 
kubenswrapper[4942]: E0218 19:20:20.697492 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 19:20:20 crc kubenswrapper[4942]: E0218 19:20:20.697704 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-db9ht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-w75d5_openshift-marketplace(f8dc55ee-28aa-4789-96c1-0809c7abdc99): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:20:20 crc kubenswrapper[4942]: E0218 19:20:20.699125 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-w75d5" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.193459 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vrlpg" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.193486 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-w75d5" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.288009 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.288448 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98p66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tk5v7_openshift-marketplace(934bc032-4641-47ee-9689-39edb4e5a24a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.289934 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tk5v7" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.335973 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.336146 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89296,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gjnbk_openshift-marketplace(a7f05662-6e61-4d86-8a52-13000d4bd2be): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.337371 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gjnbk" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.436500 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tk5v7" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.436682 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gjnbk" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.129297 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.437451 4942 generic.go:334] "Generic (PLEG): container finished" podID="02b9174b-0251-447d-8266-56e92f6e9be1" 
containerID="31a34a3009919984a90fc292e1925ef6a14bfec470e5201664bf267723f6d086" exitCode=0 Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.437524 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c28tv" event={"ID":"02b9174b-0251-447d-8266-56e92f6e9be1","Type":"ContainerDied","Data":"31a34a3009919984a90fc292e1925ef6a14bfec470e5201664bf267723f6d086"} Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.440866 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dn7d" event={"ID":"fc54a822-e044-4d85-a0a8-499a79d09aaf","Type":"ContainerStarted","Data":"f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b"} Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.442569 4942 generic.go:334] "Generic (PLEG): container finished" podID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerID="798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65" exitCode=0 Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.442632 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jvhl" event={"ID":"44e2bd42-1f23-4563-a1d2-7765ab9181f6","Type":"ContainerDied","Data":"798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65"} Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.445499 4942 generic.go:334] "Generic (PLEG): container finished" podID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerID="a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848" exitCode=0 Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.445519 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tm22r" event={"ID":"9b0511d8-736f-48fa-94a5-9a45e8482467","Type":"ContainerDied","Data":"a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848"} Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.740737 4942 patch_prober.go:28] interesting 
pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.741148 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.783801 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kpfjc"] Feb 18 19:20:24 crc kubenswrapper[4942]: I0218 19:20:24.456247 4942 generic.go:334] "Generic (PLEG): container finished" podID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerID="f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b" exitCode=0 Feb 18 19:20:24 crc kubenswrapper[4942]: I0218 19:20:24.456299 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dn7d" event={"ID":"fc54a822-e044-4d85-a0a8-499a79d09aaf","Type":"ContainerDied","Data":"f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b"} Feb 18 19:20:25 crc kubenswrapper[4942]: I0218 19:20:25.463476 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c28tv" event={"ID":"02b9174b-0251-447d-8266-56e92f6e9be1","Type":"ContainerStarted","Data":"b50a1ea31397a1041190322b11879c6dddcf223bd4897058c0f33a269c3df980"} Feb 18 19:20:25 crc kubenswrapper[4942]: I0218 19:20:25.467997 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dn7d" 
event={"ID":"fc54a822-e044-4d85-a0a8-499a79d09aaf","Type":"ContainerStarted","Data":"5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c"} Feb 18 19:20:25 crc kubenswrapper[4942]: I0218 19:20:25.470016 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jvhl" event={"ID":"44e2bd42-1f23-4563-a1d2-7765ab9181f6","Type":"ContainerStarted","Data":"8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4"} Feb 18 19:20:25 crc kubenswrapper[4942]: I0218 19:20:25.472333 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tm22r" event={"ID":"9b0511d8-736f-48fa-94a5-9a45e8482467","Type":"ContainerStarted","Data":"83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b"} Feb 18 19:20:25 crc kubenswrapper[4942]: I0218 19:20:25.484036 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c28tv" podStartSLOduration=2.4129747200000002 podStartE2EDuration="36.484013898s" podCreationTimestamp="2026-02-18 19:19:49 +0000 UTC" firstStartedPulling="2026-02-18 19:19:51.052080028 +0000 UTC m=+150.757012693" lastFinishedPulling="2026-02-18 19:20:25.123119196 +0000 UTC m=+184.828051871" observedRunningTime="2026-02-18 19:20:25.483182965 +0000 UTC m=+185.188115630" watchObservedRunningTime="2026-02-18 19:20:25.484013898 +0000 UTC m=+185.188946563" Feb 18 19:20:25 crc kubenswrapper[4942]: I0218 19:20:25.500039 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5jvhl" podStartSLOduration=2.458009175 podStartE2EDuration="33.500020979s" podCreationTimestamp="2026-02-18 19:19:52 +0000 UTC" firstStartedPulling="2026-02-18 19:19:54.198024865 +0000 UTC m=+153.902957530" lastFinishedPulling="2026-02-18 19:20:25.240036669 +0000 UTC m=+184.944969334" observedRunningTime="2026-02-18 19:20:25.498398715 +0000 UTC m=+185.203331380" 
watchObservedRunningTime="2026-02-18 19:20:25.500020979 +0000 UTC m=+185.204953644" Feb 18 19:20:25 crc kubenswrapper[4942]: I0218 19:20:25.517576 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tm22r" podStartSLOduration=3.618653725 podStartE2EDuration="37.517544512s" podCreationTimestamp="2026-02-18 19:19:48 +0000 UTC" firstStartedPulling="2026-02-18 19:19:51.025906872 +0000 UTC m=+150.730839537" lastFinishedPulling="2026-02-18 19:20:24.924797659 +0000 UTC m=+184.629730324" observedRunningTime="2026-02-18 19:20:25.516617897 +0000 UTC m=+185.221550562" watchObservedRunningTime="2026-02-18 19:20:25.517544512 +0000 UTC m=+185.222477187" Feb 18 19:20:25 crc kubenswrapper[4942]: I0218 19:20:25.533752 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5dn7d" podStartSLOduration=2.6751583070000002 podStartE2EDuration="34.533732758s" podCreationTimestamp="2026-02-18 19:19:51 +0000 UTC" firstStartedPulling="2026-02-18 19:19:53.165031542 +0000 UTC m=+152.869964207" lastFinishedPulling="2026-02-18 19:20:25.023605993 +0000 UTC m=+184.728538658" observedRunningTime="2026-02-18 19:20:25.529315419 +0000 UTC m=+185.234248084" watchObservedRunningTime="2026-02-18 19:20:25.533732758 +0000 UTC m=+185.238665413" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.295584 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.297812 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.483866 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 19:20:29 crc kubenswrapper[4942]: E0218 19:20:29.484124 4942 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4d362dd3-7195-4c71-9a1c-b4170b339f6d" containerName="pruner" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.484136 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d362dd3-7195-4c71-9a1c-b4170b339f6d" containerName="pruner" Feb 18 19:20:29 crc kubenswrapper[4942]: E0218 19:20:29.484144 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aba94fa-2207-4cae-8a64-536109c9c967" containerName="pruner" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.484149 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aba94fa-2207-4cae-8a64-536109c9c967" containerName="pruner" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.484250 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aba94fa-2207-4cae-8a64-536109c9c967" containerName="pruner" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.484260 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d362dd3-7195-4c71-9a1c-b4170b339f6d" containerName="pruner" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.484691 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.487993 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.489024 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.493689 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.552051 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.582777 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bc416be-42a9-48cf-842d-51c1dcf886ad-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bc416be-42a9-48cf-842d-51c1dcf886ad\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.582844 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bc416be-42a9-48cf-842d-51c1dcf886ad-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4bc416be-42a9-48cf-842d-51c1dcf886ad\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.684201 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bc416be-42a9-48cf-842d-51c1dcf886ad-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4bc416be-42a9-48cf-842d-51c1dcf886ad\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.684369 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bc416be-42a9-48cf-842d-51c1dcf886ad-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bc416be-42a9-48cf-842d-51c1dcf886ad\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.684473 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bc416be-42a9-48cf-842d-51c1dcf886ad-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bc416be-42a9-48cf-842d-51c1dcf886ad\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.703883 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bc416be-42a9-48cf-842d-51c1dcf886ad-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4bc416be-42a9-48cf-842d-51c1dcf886ad\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.724161 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.724242 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.776474 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.817412 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:20:30 crc kubenswrapper[4942]: I0218 19:20:30.100906 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:20:30 crc kubenswrapper[4942]: I0218 19:20:30.278949 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 19:20:30 crc kubenswrapper[4942]: I0218 19:20:30.500179 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bc416be-42a9-48cf-842d-51c1dcf886ad","Type":"ContainerStarted","Data":"f23f76c49f2f8dfecc25ad98e9df225a57a759b2c8f14b46c81c3a2c628841ef"} Feb 18 19:20:30 crc kubenswrapper[4942]: I0218 19:20:30.555625 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:20:31 crc kubenswrapper[4942]: I0218 19:20:31.516682 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bc416be-42a9-48cf-842d-51c1dcf886ad","Type":"ContainerStarted","Data":"73473b2188b5bacea6130af5e2e141bdd79f2eb606c80044881381f2bc22846e"} Feb 18 19:20:31 crc kubenswrapper[4942]: I0218 19:20:31.566065 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c28tv"] Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.298747 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5dn7d" Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.298838 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5dn7d" Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.355681 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.558098 4942 generic.go:334] "Generic (PLEG): container finished" podID="4bc416be-42a9-48cf-842d-51c1dcf886ad" containerID="73473b2188b5bacea6130af5e2e141bdd79f2eb606c80044881381f2bc22846e" exitCode=0
Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.558272 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bc416be-42a9-48cf-842d-51c1dcf886ad","Type":"ContainerDied","Data":"73473b2188b5bacea6130af5e2e141bdd79f2eb606c80044881381f2bc22846e"}
Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.558574 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c28tv" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" containerName="registry-server" containerID="cri-o://b50a1ea31397a1041190322b11879c6dddcf223bd4897058c0f33a269c3df980" gracePeriod=2
Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.603154 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.790385 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.791597 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.849343 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.571487 4942 generic.go:334] "Generic (PLEG): container finished" podID="02b9174b-0251-447d-8266-56e92f6e9be1" containerID="b50a1ea31397a1041190322b11879c6dddcf223bd4897058c0f33a269c3df980" exitCode=0
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.572210 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c28tv" event={"ID":"02b9174b-0251-447d-8266-56e92f6e9be1","Type":"ContainerDied","Data":"b50a1ea31397a1041190322b11879c6dddcf223bd4897058c0f33a269c3df980"}
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.666366 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.803802 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c28tv"
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.883324 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.969314 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjfn2\" (UniqueName: \"kubernetes.io/projected/02b9174b-0251-447d-8266-56e92f6e9be1-kube-api-access-rjfn2\") pod \"02b9174b-0251-447d-8266-56e92f6e9be1\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") "
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.969434 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-catalog-content\") pod \"02b9174b-0251-447d-8266-56e92f6e9be1\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") "
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.969525 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-utilities\") pod \"02b9174b-0251-447d-8266-56e92f6e9be1\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") "
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.970569 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-utilities" (OuterVolumeSpecName: "utilities") pod "02b9174b-0251-447d-8266-56e92f6e9be1" (UID: "02b9174b-0251-447d-8266-56e92f6e9be1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.979099 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b9174b-0251-447d-8266-56e92f6e9be1-kube-api-access-rjfn2" (OuterVolumeSpecName: "kube-api-access-rjfn2") pod "02b9174b-0251-447d-8266-56e92f6e9be1" (UID: "02b9174b-0251-447d-8266-56e92f6e9be1"). InnerVolumeSpecName "kube-api-access-rjfn2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.028755 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02b9174b-0251-447d-8266-56e92f6e9be1" (UID: "02b9174b-0251-447d-8266-56e92f6e9be1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.070690 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bc416be-42a9-48cf-842d-51c1dcf886ad-kubelet-dir\") pod \"4bc416be-42a9-48cf-842d-51c1dcf886ad\" (UID: \"4bc416be-42a9-48cf-842d-51c1dcf886ad\") "
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.070831 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bc416be-42a9-48cf-842d-51c1dcf886ad-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4bc416be-42a9-48cf-842d-51c1dcf886ad" (UID: "4bc416be-42a9-48cf-842d-51c1dcf886ad"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.070865 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bc416be-42a9-48cf-842d-51c1dcf886ad-kube-api-access\") pod \"4bc416be-42a9-48cf-842d-51c1dcf886ad\" (UID: \"4bc416be-42a9-48cf-842d-51c1dcf886ad\") "
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.071629 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjfn2\" (UniqueName: \"kubernetes.io/projected/02b9174b-0251-447d-8266-56e92f6e9be1-kube-api-access-rjfn2\") on node \"crc\" DevicePath \"\""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.071661 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.071676 4942 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bc416be-42a9-48cf-842d-51c1dcf886ad-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.071741 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.074913 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc416be-42a9-48cf-842d-51c1dcf886ad-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4bc416be-42a9-48cf-842d-51c1dcf886ad" (UID: "4bc416be-42a9-48cf-842d-51c1dcf886ad"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.173352 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bc416be-42a9-48cf-842d-51c1dcf886ad-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.578620 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c28tv" event={"ID":"02b9174b-0251-447d-8266-56e92f6e9be1","Type":"ContainerDied","Data":"8c6af65ee9862a635b1667bd69dde2c1cdffc885f9052d205608bd240b148144"}
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.578689 4942 scope.go:117] "RemoveContainer" containerID="b50a1ea31397a1041190322b11879c6dddcf223bd4897058c0f33a269c3df980"
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.578643 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c28tv"
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.581224 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bc416be-42a9-48cf-842d-51c1dcf886ad","Type":"ContainerDied","Data":"f23f76c49f2f8dfecc25ad98e9df225a57a759b2c8f14b46c81c3a2c628841ef"}
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.581315 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f23f76c49f2f8dfecc25ad98e9df225a57a759b2c8f14b46c81c3a2c628841ef"
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.581491 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.602957 4942 scope.go:117] "RemoveContainer" containerID="31a34a3009919984a90fc292e1925ef6a14bfec470e5201664bf267723f6d086"
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.613983 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c28tv"]
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.618604 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c28tv"]
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.641794 4942 scope.go:117] "RemoveContainer" containerID="01a84768c2d4f4eb7b1180f3d4ce6ea22f3b2fc585b9417ed7bc6475cacbd4a4"
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.761675 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5jvhl"]
Feb 18 19:20:35 crc kubenswrapper[4942]: I0218 19:20:35.044042 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" path="/var/lib/kubelet/pods/02b9174b-0251-447d-8266-56e92f6e9be1/volumes"
Feb 18 19:20:36 crc kubenswrapper[4942]: I0218 19:20:36.601826 4942 generic.go:334] "Generic (PLEG): container finished" podID="934bc032-4641-47ee-9689-39edb4e5a24a" containerID="3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141" exitCode=0
Feb 18 19:20:36 crc kubenswrapper[4942]: I0218 19:20:36.601946 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk5v7" event={"ID":"934bc032-4641-47ee-9689-39edb4e5a24a","Type":"ContainerDied","Data":"3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141"}
Feb 18 19:20:36 crc kubenswrapper[4942]: I0218 19:20:36.602512 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5jvhl" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerName="registry-server" containerID="cri-o://8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4" gracePeriod=2
Feb 18 19:20:36 crc kubenswrapper[4942]: I0218 19:20:36.977109 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.109534 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr6h8\" (UniqueName: \"kubernetes.io/projected/44e2bd42-1f23-4563-a1d2-7765ab9181f6-kube-api-access-tr6h8\") pod \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") "
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.109889 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-utilities\") pod \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") "
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.109943 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-catalog-content\") pod \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") "
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.110671 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-utilities" (OuterVolumeSpecName: "utilities") pod "44e2bd42-1f23-4563-a1d2-7765ab9181f6" (UID: "44e2bd42-1f23-4563-a1d2-7765ab9181f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.115547 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e2bd42-1f23-4563-a1d2-7765ab9181f6-kube-api-access-tr6h8" (OuterVolumeSpecName: "kube-api-access-tr6h8") pod "44e2bd42-1f23-4563-a1d2-7765ab9181f6" (UID: "44e2bd42-1f23-4563-a1d2-7765ab9181f6"). InnerVolumeSpecName "kube-api-access-tr6h8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.211356 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.211401 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr6h8\" (UniqueName: \"kubernetes.io/projected/44e2bd42-1f23-4563-a1d2-7765ab9181f6-kube-api-access-tr6h8\") on node \"crc\" DevicePath \"\""
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.238246 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44e2bd42-1f23-4563-a1d2-7765ab9181f6" (UID: "44e2bd42-1f23-4563-a1d2-7765ab9181f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.312911 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.613714 4942 generic.go:334] "Generic (PLEG): container finished" podID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerID="c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff" exitCode=0
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.613869 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjnbk" event={"ID":"a7f05662-6e61-4d86-8a52-13000d4bd2be","Type":"ContainerDied","Data":"c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff"}
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.619728 4942 generic.go:334] "Generic (PLEG): container finished" podID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerID="8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4" exitCode=0
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.619810 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jvhl" event={"ID":"44e2bd42-1f23-4563-a1d2-7765ab9181f6","Type":"ContainerDied","Data":"8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4"}
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.619831 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.619868 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jvhl" event={"ID":"44e2bd42-1f23-4563-a1d2-7765ab9181f6","Type":"ContainerDied","Data":"076f790451e8965bca0e7fb3c29d623ec83f9c2b76666a5189e58eb3eab1c839"}
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.619892 4942 scope.go:117] "RemoveContainer" containerID="8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.621936 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk5v7" event={"ID":"934bc032-4641-47ee-9689-39edb4e5a24a","Type":"ContainerStarted","Data":"451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1"}
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.640670 4942 scope.go:117] "RemoveContainer" containerID="798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.656179 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tk5v7" podStartSLOduration=2.699569652 podStartE2EDuration="48.656150933s" podCreationTimestamp="2026-02-18 19:19:49 +0000 UTC" firstStartedPulling="2026-02-18 19:19:51.071874792 +0000 UTC m=+150.776807457" lastFinishedPulling="2026-02-18 19:20:37.028456073 +0000 UTC m=+196.733388738" observedRunningTime="2026-02-18 19:20:37.655812723 +0000 UTC m=+197.360745398" watchObservedRunningTime="2026-02-18 19:20:37.656150933 +0000 UTC m=+197.361083598"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.664378 4942 scope.go:117] "RemoveContainer" containerID="9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.678001 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5jvhl"]
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.679029 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5jvhl"]
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684268 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.684527 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerName="registry-server"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684545 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerName="registry-server"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.684556 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerName="extract-content"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684563 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerName="extract-content"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.684574 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerName="extract-utilities"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684581 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerName="extract-utilities"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.684591 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" containerName="registry-server"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684597 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" containerName="registry-server"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.684607 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" containerName="extract-utilities"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684613 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" containerName="extract-utilities"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.684622 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" containerName="extract-content"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684628 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" containerName="extract-content"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.684635 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc416be-42a9-48cf-842d-51c1dcf886ad" containerName="pruner"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684641 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc416be-42a9-48cf-842d-51c1dcf886ad" containerName="pruner"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684736 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" containerName="registry-server"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684745 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerName="registry-server"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684777 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc416be-42a9-48cf-842d-51c1dcf886ad" containerName="pruner"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.685184 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.688284 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.688442 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.691478 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.701345 4942 scope.go:117] "RemoveContainer" containerID="8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.707298 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4\": container with ID starting with 8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4 not found: ID does not exist" containerID="8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.707690 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4"} err="failed to get container status \"8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4\": rpc error: code = NotFound desc = could not find container \"8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4\": container with ID starting with 8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4 not found: ID does not exist"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.707788 4942 scope.go:117] "RemoveContainer" containerID="798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.708087 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65\": container with ID starting with 798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65 not found: ID does not exist" containerID="798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.708108 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65"} err="failed to get container status \"798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65\": rpc error: code = NotFound desc = could not find container \"798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65\": container with ID starting with 798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65 not found: ID does not exist"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.708126 4942 scope.go:117] "RemoveContainer" containerID="9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.708339 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74\": container with ID starting with 9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74 not found: ID does not exist" containerID="9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.708361 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74"} err="failed to get container status \"9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74\": rpc error: code = NotFound desc = could not find container \"9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74\": container with ID starting with 9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74 not found: ID does not exist"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.818518 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-var-lock\") pod \"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.818584 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-kubelet-dir\") pod \"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.818729 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eea7d003-0909-4006-b81d-e566f256b0aa-kube-api-access\") pod \"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.920134 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eea7d003-0909-4006-b81d-e566f256b0aa-kube-api-access\") pod \"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.920191 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-var-lock\") pod \"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.920212 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-kubelet-dir\") pod \"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.920288 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-var-lock\") pod \"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.920316 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-kubelet-dir\") pod \"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.939337 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eea7d003-0909-4006-b81d-e566f256b0aa-kube-api-access\") pod \"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.999381 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.190521 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 18 19:20:38 crc kubenswrapper[4942]: W0218 19:20:38.199164 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podeea7d003_0909_4006_b81d_e566f256b0aa.slice/crio-239dfe6552f2af1d441cc549207cdf73962930bc9e631840f8db8225c35b625e WatchSource:0}: Error finding container 239dfe6552f2af1d441cc549207cdf73962930bc9e631840f8db8225c35b625e: Status 404 returned error can't find the container with id 239dfe6552f2af1d441cc549207cdf73962930bc9e631840f8db8225c35b625e
Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.630548 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjnbk" event={"ID":"a7f05662-6e61-4d86-8a52-13000d4bd2be","Type":"ContainerStarted","Data":"c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58"}
Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.632296 4942 generic.go:334] "Generic (PLEG): container finished" podID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerID="6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09" exitCode=0
Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.632366 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrlpg" event={"ID":"07639322-4f8b-47d5-85c7-da678ca9eaf1","Type":"ContainerDied","Data":"6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09"}
Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.634977 4942 generic.go:334] "Generic (PLEG): container finished" podID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerID="fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc" exitCode=0
Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.635023 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w75d5" event={"ID":"f8dc55ee-28aa-4789-96c1-0809c7abdc99","Type":"ContainerDied","Data":"fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc"}
Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.641601 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"eea7d003-0909-4006-b81d-e566f256b0aa","Type":"ContainerStarted","Data":"cd03ae906b7bee058d56cb0846d1b0e67c0721c950835150412c01f8c34159f0"}
Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.641799 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"eea7d003-0909-4006-b81d-e566f256b0aa","Type":"ContainerStarted","Data":"239dfe6552f2af1d441cc549207cdf73962930bc9e631840f8db8225c35b625e"}
Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.679198 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gjnbk" podStartSLOduration=3.685763764 podStartE2EDuration="50.679177156s" podCreationTimestamp="2026-02-18 19:19:48 +0000 UTC" firstStartedPulling="2026-02-18 19:19:51.058944893 +0000 UTC m=+150.763877558" lastFinishedPulling="2026-02-18 19:20:38.052358285 +0000 UTC m=+197.757290950" observedRunningTime="2026-02-18 19:20:38.653073356 +0000 UTC m=+198.358006021" watchObservedRunningTime="2026-02-18 19:20:38.679177156 +0000 UTC m=+198.384109821"
Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.722256 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.722229636 podStartE2EDuration="1.722229636s" podCreationTimestamp="2026-02-18 19:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:38.721927787 +0000 UTC m=+198.426860452" watchObservedRunningTime="2026-02-18 19:20:38.722229636 +0000 UTC m=+198.427162301"
Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.055657 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" path="/var/lib/kubelet/pods/44e2bd42-1f23-4563-a1d2-7765ab9181f6/volumes"
Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.336907 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tm22r"
Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.397211 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gjnbk"
Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.397266 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gjnbk"
Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.664421 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrlpg" event={"ID":"07639322-4f8b-47d5-85c7-da678ca9eaf1","Type":"ContainerStarted","Data":"0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87"}
Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.668199 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w75d5" event={"ID":"f8dc55ee-28aa-4789-96c1-0809c7abdc99","Type":"ContainerStarted","Data":"d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544"}
Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.686551 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vrlpg" podStartSLOduration=2.639806826 podStartE2EDuration="49.686528534s" podCreationTimestamp="2026-02-18 19:19:50 +0000 UTC" firstStartedPulling="2026-02-18 19:19:52.116175591 +0000 UTC m=+151.821108256" lastFinishedPulling="2026-02-18 19:20:39.162897309 +0000 UTC m=+198.867829964" observedRunningTime="2026-02-18 19:20:39.683496788 +0000 UTC m=+199.388429463" watchObservedRunningTime="2026-02-18 19:20:39.686528534 +0000 UTC m=+199.391461199"
Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.699587 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w75d5" podStartSLOduration=2.845630752 podStartE2EDuration="48.699563904s" podCreationTimestamp="2026-02-18 19:19:51 +0000 UTC" firstStartedPulling="2026-02-18 19:19:53.169210515 +0000 UTC m=+152.874143180" lastFinishedPulling="2026-02-18 19:20:39.023143657 +0000 UTC m=+198.728076332" observedRunningTime="2026-02-18 19:20:39.698329689 +0000 UTC m=+199.403262354" watchObservedRunningTime="2026-02-18 19:20:39.699563904 +0000 UTC m=+199.404496569"
Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.795866 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tk5v7"
Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.795926 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tk5v7"
Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.870033 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tk5v7"
Feb 18 19:20:40 crc kubenswrapper[4942]: I0218 19:20:40.438319 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gjnbk" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="registry-server" probeResult="failure" output=<
Feb 18 19:20:40 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s
Feb 18 19:20:40 crc kubenswrapper[4942]: >
Feb 18 19:20:41 crc kubenswrapper[4942]: I0218 19:20:41.283275 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vrlpg"
Feb 18 19:20:41 crc kubenswrapper[4942]: I0218 19:20:41.283542 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vrlpg"
Feb 18 19:20:41 crc kubenswrapper[4942]: I0218 19:20:41.321819 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vrlpg"
Feb 18 19:20:41 crc kubenswrapper[4942]: I0218 19:20:41.825515 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w75d5"
Feb 18 19:20:41 crc kubenswrapper[4942]: I0218 19:20:41.825583 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w75d5"
Feb 18 19:20:41 crc kubenswrapper[4942]: I0218 19:20:41.876379 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w75d5"
Feb 18 19:20:48 crc kubenswrapper[4942]: I0218 19:20:48.817957 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" podUID="42dda107-038c-42c1-8182-52bee75caea9" containerName="oauth-openshift" containerID="cri-o://00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0" gracePeriod=15
Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.458777 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gjnbk"
Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.509121 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gjnbk"
Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.716549 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.747189 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-26rlh"]
Feb 18 19:20:49 crc kubenswrapper[4942]: E0218 19:20:49.747465 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42dda107-038c-42c1-8182-52bee75caea9" containerName="oauth-openshift"
Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.747479 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="42dda107-038c-42c1-8182-52bee75caea9" containerName="oauth-openshift"
Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.747607 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="42dda107-038c-42c1-8182-52bee75caea9" containerName="oauth-openshift"
Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.748073 4942 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.748249 4942 generic.go:334] "Generic (PLEG): container finished" podID="42dda107-038c-42c1-8182-52bee75caea9" containerID="00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0" exitCode=0 Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.748456 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" event={"ID":"42dda107-038c-42c1-8182-52bee75caea9","Type":"ContainerDied","Data":"00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0"} Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.748500 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" event={"ID":"42dda107-038c-42c1-8182-52bee75caea9","Type":"ContainerDied","Data":"549a45966f3465b915ee762043425f7fc34d780e5d763266b632f538fe2cd88e"} Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.748520 4942 scope.go:117] "RemoveContainer" containerID="00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.748523 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.766385 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-26rlh"] Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.772493 4942 scope.go:117] "RemoveContainer" containerID="00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0" Feb 18 19:20:49 crc kubenswrapper[4942]: E0218 19:20:49.784299 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0\": container with ID starting with 00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0 not found: ID does not exist" containerID="00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.784356 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0"} err="failed to get container status \"00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0\": rpc error: code = NotFound desc = could not find container \"00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0\": container with ID starting with 00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0 not found: ID does not exist" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806568 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-idp-0-file-data\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806686 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-router-certs\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806718 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-audit-policies\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806752 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-error\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806794 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-provider-selection\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806873 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-login\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806920 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/42dda107-038c-42c1-8182-52bee75caea9-audit-dir\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806948 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-serving-cert\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806975 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wh89\" (UniqueName: \"kubernetes.io/projected/42dda107-038c-42c1-8182-52bee75caea9-kube-api-access-2wh89\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806997 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-service-ca\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807045 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-trusted-ca-bundle\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807073 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-ocp-branding-template\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807130 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-cliconfig\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807162 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-session\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807353 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807385 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807406 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-serving-cert\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807447 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807481 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807504 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78f383f9-664c-43eb-9253-d9df1eaa9716-audit-dir\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807542 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-audit-policies\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807599 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807634 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807661 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-error\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807686 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-26rlh\" (UID: 
\"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807713 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807748 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8lgn\" (UniqueName: \"kubernetes.io/projected/78f383f9-664c-43eb-9253-d9df1eaa9716-kube-api-access-x8lgn\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807802 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.810387 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.810972 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42dda107-038c-42c1-8182-52bee75caea9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.811313 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.812260 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.817735 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.817792 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.819468 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42dda107-038c-42c1-8182-52bee75caea9-kube-api-access-2wh89" (OuterVolumeSpecName: "kube-api-access-2wh89") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "kube-api-access-2wh89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.820370 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.831867 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.832518 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.836010 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.839937 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.842241 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.847566 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.862535 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908604 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908658 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908678 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-serving-cert\") pod \"oauth-openshift-666545c866-26rlh\" (UID: 
\"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908700 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908732 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908749 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78f383f9-664c-43eb-9253-d9df1eaa9716-audit-dir\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908787 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-audit-policies\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908807 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908831 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908851 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-error\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908865 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908884 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-26rlh\" (UID: 
\"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908910 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8lgn\" (UniqueName: \"kubernetes.io/projected/78f383f9-664c-43eb-9253-d9df1eaa9716-kube-api-access-x8lgn\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908932 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908974 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908985 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908994 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909004 4942 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909015 4942 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909025 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909036 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909046 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909055 4942 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42dda107-038c-42c1-8182-52bee75caea9-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909063 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909072 4942 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wh89\" (UniqueName: \"kubernetes.io/projected/42dda107-038c-42c1-8182-52bee75caea9-kube-api-access-2wh89\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909080 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909091 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909099 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909847 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78f383f9-664c-43eb-9253-d9df1eaa9716-audit-dir\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.910055 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc 
kubenswrapper[4942]: I0218 19:20:49.910585 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.910870 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-audit-policies\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.910888 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.913292 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.913321 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.913310 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.913419 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.913783 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.914178 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.915024 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.915270 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-error\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.925594 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8lgn\" (UniqueName: \"kubernetes.io/projected/78f383f9-664c-43eb-9253-d9df1eaa9716-kube-api-access-x8lgn\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.078543 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.104851 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kpfjc"] Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.109161 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kpfjc"] Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.378824 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-26rlh"] Feb 18 19:20:50 crc kubenswrapper[4942]: W0218 19:20:50.398660 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78f383f9_664c_43eb_9253_d9df1eaa9716.slice/crio-965e0d200427d08586902f78bf1baa491dbf59fa6e987cf6374a2c46d986a8a7 WatchSource:0}: Error finding container 965e0d200427d08586902f78bf1baa491dbf59fa6e987cf6374a2c46d986a8a7: Status 404 returned error can't find the container with id 965e0d200427d08586902f78bf1baa491dbf59fa6e987cf6374a2c46d986a8a7 Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.762689 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" event={"ID":"78f383f9-664c-43eb-9253-d9df1eaa9716","Type":"ContainerStarted","Data":"eee278d9dc862c887d60eff6183347bcd5020cd73bab83eb57f6af86c7f3f58a"} Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.763338 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.763367 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" 
event={"ID":"78f383f9-664c-43eb-9253-d9df1eaa9716","Type":"ContainerStarted","Data":"965e0d200427d08586902f78bf1baa491dbf59fa6e987cf6374a2c46d986a8a7"} Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.766129 4942 patch_prober.go:28] interesting pod/oauth-openshift-666545c866-26rlh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" start-of-body= Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.766204 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" podUID="78f383f9-664c-43eb-9253-d9df1eaa9716" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.794481 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" podStartSLOduration=27.794446709 podStartE2EDuration="27.794446709s" podCreationTimestamp="2026-02-18 19:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:50.790084885 +0000 UTC m=+210.495017610" watchObservedRunningTime="2026-02-18 19:20:50.794446709 +0000 UTC m=+210.499379414" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.047254 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42dda107-038c-42c1-8182-52bee75caea9" path="/var/lib/kubelet/pods/42dda107-038c-42c1-8182-52bee75caea9/volumes" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.353825 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.367105 4942 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tk5v7"] Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.367574 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tk5v7" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" containerName="registry-server" containerID="cri-o://451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1" gracePeriod=2 Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.726616 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.755005 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98p66\" (UniqueName: \"kubernetes.io/projected/934bc032-4641-47ee-9689-39edb4e5a24a-kube-api-access-98p66\") pod \"934bc032-4641-47ee-9689-39edb4e5a24a\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.755153 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-catalog-content\") pod \"934bc032-4641-47ee-9689-39edb4e5a24a\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.755229 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-utilities\") pod \"934bc032-4641-47ee-9689-39edb4e5a24a\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.757078 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-utilities" (OuterVolumeSpecName: "utilities") pod 
"934bc032-4641-47ee-9689-39edb4e5a24a" (UID: "934bc032-4641-47ee-9689-39edb4e5a24a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.764648 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/934bc032-4641-47ee-9689-39edb4e5a24a-kube-api-access-98p66" (OuterVolumeSpecName: "kube-api-access-98p66") pod "934bc032-4641-47ee-9689-39edb4e5a24a" (UID: "934bc032-4641-47ee-9689-39edb4e5a24a"). InnerVolumeSpecName "kube-api-access-98p66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.775718 4942 generic.go:334] "Generic (PLEG): container finished" podID="934bc032-4641-47ee-9689-39edb4e5a24a" containerID="451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1" exitCode=0 Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.775831 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk5v7" event={"ID":"934bc032-4641-47ee-9689-39edb4e5a24a","Type":"ContainerDied","Data":"451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1"} Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.775884 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk5v7" event={"ID":"934bc032-4641-47ee-9689-39edb4e5a24a","Type":"ContainerDied","Data":"5c780f3eaf3a7663544d07b41d9cc753cd4008f1802dbe09d0227e582dd487c7"} Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.775906 4942 scope.go:117] "RemoveContainer" containerID="451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.775842 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.783343 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.796022 4942 scope.go:117] "RemoveContainer" containerID="3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.817275 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "934bc032-4641-47ee-9689-39edb4e5a24a" (UID: "934bc032-4641-47ee-9689-39edb4e5a24a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.841889 4942 scope.go:117] "RemoveContainer" containerID="7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.860754 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98p66\" (UniqueName: \"kubernetes.io/projected/934bc032-4641-47ee-9689-39edb4e5a24a-kube-api-access-98p66\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.860808 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.860818 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.876541 4942 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.879493 4942 scope.go:117] "RemoveContainer" containerID="451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1" Feb 18 19:20:51 crc kubenswrapper[4942]: E0218 19:20:51.881400 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1\": container with ID starting with 451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1 not found: ID does not exist" containerID="451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.881460 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1"} err="failed to get container status \"451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1\": rpc error: code = NotFound desc = could not find container \"451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1\": container with ID starting with 451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1 not found: ID does not exist" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.881489 4942 scope.go:117] "RemoveContainer" containerID="3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141" Feb 18 19:20:51 crc kubenswrapper[4942]: E0218 19:20:51.881953 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141\": container with ID starting with 3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141 not found: ID does not exist" containerID="3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141" Feb 18 19:20:51 crc 
kubenswrapper[4942]: I0218 19:20:51.882012 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141"} err="failed to get container status \"3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141\": rpc error: code = NotFound desc = could not find container \"3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141\": container with ID starting with 3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141 not found: ID does not exist" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.882045 4942 scope.go:117] "RemoveContainer" containerID="7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7" Feb 18 19:20:51 crc kubenswrapper[4942]: E0218 19:20:51.882458 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7\": container with ID starting with 7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7 not found: ID does not exist" containerID="7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.882490 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7"} err="failed to get container status \"7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7\": rpc error: code = NotFound desc = could not find container \"7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7\": container with ID starting with 7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7 not found: ID does not exist" Feb 18 19:20:52 crc kubenswrapper[4942]: I0218 19:20:52.101122 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tk5v7"] Feb 18 19:20:52 
crc kubenswrapper[4942]: I0218 19:20:52.103946 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tk5v7"] Feb 18 19:20:53 crc kubenswrapper[4942]: I0218 19:20:53.047993 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" path="/var/lib/kubelet/pods/934bc032-4641-47ee-9689-39edb4e5a24a/volumes" Feb 18 19:20:53 crc kubenswrapper[4942]: I0218 19:20:53.742550 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:20:53 crc kubenswrapper[4942]: I0218 19:20:53.742663 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:20:53 crc kubenswrapper[4942]: I0218 19:20:53.742745 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:20:53 crc kubenswrapper[4942]: I0218 19:20:53.743838 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:20:53 crc kubenswrapper[4942]: I0218 19:20:53.743972 4942 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1" gracePeriod=600 Feb 18 19:20:53 crc kubenswrapper[4942]: I0218 19:20:53.768741 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w75d5"] Feb 18 19:20:53 crc kubenswrapper[4942]: I0218 19:20:53.769176 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w75d5" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerName="registry-server" containerID="cri-o://d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544" gracePeriod=2 Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.188221 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.199433 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-utilities\") pod \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.199559 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-catalog-content\") pod \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.199635 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db9ht\" (UniqueName: \"kubernetes.io/projected/f8dc55ee-28aa-4789-96c1-0809c7abdc99-kube-api-access-db9ht\") pod 
\"f8dc55ee-28aa-4789-96c1-0809c7abdc99\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.202865 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-utilities" (OuterVolumeSpecName: "utilities") pod "f8dc55ee-28aa-4789-96c1-0809c7abdc99" (UID: "f8dc55ee-28aa-4789-96c1-0809c7abdc99"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.206213 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8dc55ee-28aa-4789-96c1-0809c7abdc99-kube-api-access-db9ht" (OuterVolumeSpecName: "kube-api-access-db9ht") pod "f8dc55ee-28aa-4789-96c1-0809c7abdc99" (UID: "f8dc55ee-28aa-4789-96c1-0809c7abdc99"). InnerVolumeSpecName "kube-api-access-db9ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.228480 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8dc55ee-28aa-4789-96c1-0809c7abdc99" (UID: "f8dc55ee-28aa-4789-96c1-0809c7abdc99"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.300454 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.300519 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.300539 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db9ht\" (UniqueName: \"kubernetes.io/projected/f8dc55ee-28aa-4789-96c1-0809c7abdc99-kube-api-access-db9ht\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.804328 4942 generic.go:334] "Generic (PLEG): container finished" podID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerID="d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544" exitCode=0 Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.804424 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.804434 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w75d5" event={"ID":"f8dc55ee-28aa-4789-96c1-0809c7abdc99","Type":"ContainerDied","Data":"d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544"} Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.804996 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w75d5" event={"ID":"f8dc55ee-28aa-4789-96c1-0809c7abdc99","Type":"ContainerDied","Data":"bf827b49c615857eb54c5c1b4eb25133056e0a9065497fbb34a9215010ac6e9f"} Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.805035 4942 scope.go:117] "RemoveContainer" containerID="d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.808747 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1" exitCode=0 Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.808793 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1"} Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.808875 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"cbd8c39f4ca27a862760680c197d71be21444460d43b83855f644da4c249ce06"} Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.839906 4942 scope.go:117] "RemoveContainer" 
containerID="fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.859473 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w75d5"] Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.862740 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w75d5"] Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.870517 4942 scope.go:117] "RemoveContainer" containerID="b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.890004 4942 scope.go:117] "RemoveContainer" containerID="d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544" Feb 18 19:20:54 crc kubenswrapper[4942]: E0218 19:20:54.890563 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544\": container with ID starting with d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544 not found: ID does not exist" containerID="d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.890619 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544"} err="failed to get container status \"d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544\": rpc error: code = NotFound desc = could not find container \"d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544\": container with ID starting with d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544 not found: ID does not exist" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.890658 4942 scope.go:117] "RemoveContainer" 
containerID="fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc" Feb 18 19:20:54 crc kubenswrapper[4942]: E0218 19:20:54.891325 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc\": container with ID starting with fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc not found: ID does not exist" containerID="fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.891356 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc"} err="failed to get container status \"fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc\": rpc error: code = NotFound desc = could not find container \"fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc\": container with ID starting with fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc not found: ID does not exist" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.891376 4942 scope.go:117] "RemoveContainer" containerID="b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd" Feb 18 19:20:54 crc kubenswrapper[4942]: E0218 19:20:54.891863 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd\": container with ID starting with b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd not found: ID does not exist" containerID="b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.891928 4942 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd"} err="failed to get container status \"b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd\": rpc error: code = NotFound desc = could not find container \"b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd\": container with ID starting with b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd not found: ID does not exist" Feb 18 19:20:55 crc kubenswrapper[4942]: I0218 19:20:55.047959 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" path="/var/lib/kubelet/pods/f8dc55ee-28aa-4789-96c1-0809c7abdc99/volumes" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.191948 4942 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.193050 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerName="extract-content" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193073 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerName="extract-content" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.193093 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerName="registry-server" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193106 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerName="registry-server" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.193130 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" containerName="registry-server" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193144 4942 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" containerName="registry-server" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.193167 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" containerName="extract-utilities" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193180 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" containerName="extract-utilities" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.193200 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" containerName="extract-content" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193212 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" containerName="extract-content" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.193232 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerName="extract-utilities" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193245 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerName="extract-utilities" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193414 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerName="registry-server" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193433 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" containerName="registry-server" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193996 4942 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194036 4942 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194155 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194438 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383" gracePeriod=15 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194451 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3" gracePeriod=15 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194497 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88" gracePeriod=15 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194520 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954" gracePeriod=15 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194446 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d" gracePeriod=15 Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.194752 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194779 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.194789 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194794 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.194800 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194806 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.194814 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194820 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.194830 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194835 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.194841 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194847 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.194854 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194860 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194961 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194974 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194985 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194994 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.195002 4942 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.195012 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.195094 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.195100 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.195222 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.198795 4942 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.275090 4942 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.324168 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.324221 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.324240 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.324257 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.324274 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.324472 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.324564 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.324705 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.425778 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426108 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426136 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.425899 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426157 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426195 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426216 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426237 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426251 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426295 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426258 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426263 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426355 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426390 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426442 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426491 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.575835 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.603534 4942 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.188:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18956d8c05eebe81 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:21:16.602539649 +0000 UTC m=+236.307472314,LastTimestamp:2026-02-18 19:21:16.602539649 +0000 UTC m=+236.307472314,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.946127 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.947984 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.948747 4942 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3" exitCode=0 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.948786 4942 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88" exitCode=0 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.948794 4942 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d" exitCode=0 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.948801 4942 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954" exitCode=2 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.948800 4942 scope.go:117] "RemoveContainer" containerID="b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.954292 4942 generic.go:334] "Generic (PLEG): container finished" podID="eea7d003-0909-4006-b81d-e566f256b0aa" containerID="cd03ae906b7bee058d56cb0846d1b0e67c0721c950835150412c01f8c34159f0" exitCode=0 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.954346 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"eea7d003-0909-4006-b81d-e566f256b0aa","Type":"ContainerDied","Data":"cd03ae906b7bee058d56cb0846d1b0e67c0721c950835150412c01f8c34159f0"} Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.954987 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:16 crc 
kubenswrapper[4942]: I0218 19:21:16.955923 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"17d52aa652e2262a448752f8eeedf1ade032558596806a1871b71588f0f54812"} Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.955984 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"45e67579ac13322a7f5886f560eaf4d5f854a9c9c1fd56d9f69639efc91d0d7f"} Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.956444 4942 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.956474 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:17 crc kubenswrapper[4942]: I0218 19:21:17.965854 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.223462 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.224725 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.249278 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eea7d003-0909-4006-b81d-e566f256b0aa-kube-api-access\") pod \"eea7d003-0909-4006-b81d-e566f256b0aa\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.249332 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-var-lock\") pod \"eea7d003-0909-4006-b81d-e566f256b0aa\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.249380 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-kubelet-dir\") pod \"eea7d003-0909-4006-b81d-e566f256b0aa\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.249593 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "eea7d003-0909-4006-b81d-e566f256b0aa" (UID: "eea7d003-0909-4006-b81d-e566f256b0aa"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.249621 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-var-lock" (OuterVolumeSpecName: "var-lock") pod "eea7d003-0909-4006-b81d-e566f256b0aa" (UID: "eea7d003-0909-4006-b81d-e566f256b0aa"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.260120 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea7d003-0909-4006-b81d-e566f256b0aa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "eea7d003-0909-4006-b81d-e566f256b0aa" (UID: "eea7d003-0909-4006-b81d-e566f256b0aa"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.350733 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eea7d003-0909-4006-b81d-e566f256b0aa-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.350789 4942 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.350801 4942 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.549245 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 
19:21:18.550021 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.550669 4942 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.551262 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.551543 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.551586 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.551640 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.551726 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.551941 4942 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.551965 4942 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.653213 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.653323 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.654134 4942 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:18 crc kubenswrapper[4942]: E0218 19:21:18.767356 4942 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.188:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18956d8c05eebe81 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:21:16.602539649 +0000 UTC m=+236.307472314,LastTimestamp:2026-02-18 19:21:16.602539649 +0000 UTC m=+236.307472314,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.980695 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.981564 4942 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383" 
exitCode=0 Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.981667 4942 scope.go:117] "RemoveContainer" containerID="c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.981727 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.983976 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"eea7d003-0909-4006-b81d-e566f256b0aa","Type":"ContainerDied","Data":"239dfe6552f2af1d441cc549207cdf73962930bc9e631840f8db8225c35b625e"} Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.984015 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="239dfe6552f2af1d441cc549207cdf73962930bc9e631840f8db8225c35b625e" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.984085 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.005226 4942 scope.go:117] "RemoveContainer" containerID="5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.006241 4942 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.007902 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.008661 4942 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.009172 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.025161 4942 scope.go:117] "RemoveContainer" containerID="ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d" Feb 18 19:21:19 crc 
kubenswrapper[4942]: I0218 19:21:19.039780 4942 scope.go:117] "RemoveContainer" containerID="ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.042524 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.064831 4942 scope.go:117] "RemoveContainer" containerID="beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.080603 4942 scope.go:117] "RemoveContainer" containerID="6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.105669 4942 scope.go:117] "RemoveContainer" containerID="c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3" Feb 18 19:21:19 crc kubenswrapper[4942]: E0218 19:21:19.107362 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\": container with ID starting with c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3 not found: ID does not exist" containerID="c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.107428 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3"} err="failed to get container status \"c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\": rpc error: code = NotFound desc = could not find container \"c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\": container with ID starting with c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3 not found: ID does not exist" Feb 
18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.107469 4942 scope.go:117] "RemoveContainer" containerID="5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88" Feb 18 19:21:19 crc kubenswrapper[4942]: E0218 19:21:19.107851 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\": container with ID starting with 5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88 not found: ID does not exist" containerID="5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.107895 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88"} err="failed to get container status \"5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\": rpc error: code = NotFound desc = could not find container \"5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\": container with ID starting with 5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88 not found: ID does not exist" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.107919 4942 scope.go:117] "RemoveContainer" containerID="ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d" Feb 18 19:21:19 crc kubenswrapper[4942]: E0218 19:21:19.109009 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\": container with ID starting with ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d not found: ID does not exist" containerID="ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.109039 4942 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d"} err="failed to get container status \"ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\": rpc error: code = NotFound desc = could not find container \"ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\": container with ID starting with ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d not found: ID does not exist" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.109055 4942 scope.go:117] "RemoveContainer" containerID="ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954" Feb 18 19:21:19 crc kubenswrapper[4942]: E0218 19:21:19.109454 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\": container with ID starting with ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954 not found: ID does not exist" containerID="ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.109486 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954"} err="failed to get container status \"ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\": rpc error: code = NotFound desc = could not find container \"ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\": container with ID starting with ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954 not found: ID does not exist" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.109501 4942 scope.go:117] "RemoveContainer" containerID="beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383" Feb 18 19:21:19 crc kubenswrapper[4942]: E0218 19:21:19.109840 4942 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\": container with ID starting with beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383 not found: ID does not exist" containerID="beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.109882 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383"} err="failed to get container status \"beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\": rpc error: code = NotFound desc = could not find container \"beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\": container with ID starting with beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383 not found: ID does not exist" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.109916 4942 scope.go:117] "RemoveContainer" containerID="6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09" Feb 18 19:21:19 crc kubenswrapper[4942]: E0218 19:21:19.110258 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\": container with ID starting with 6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09 not found: ID does not exist" containerID="6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.110283 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09"} err="failed to get container status \"6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\": rpc error: code = NotFound desc = could not find container 
\"6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\": container with ID starting with 6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09 not found: ID does not exist" Feb 18 19:21:21 crc kubenswrapper[4942]: I0218 19:21:21.041898 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:21 crc kubenswrapper[4942]: E0218 19:21:21.700433 4942 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:21 crc kubenswrapper[4942]: E0218 19:21:21.701307 4942 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:21 crc kubenswrapper[4942]: E0218 19:21:21.701691 4942 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:21 crc kubenswrapper[4942]: E0218 19:21:21.702154 4942 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:21 crc kubenswrapper[4942]: E0218 19:21:21.702534 4942 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": 
dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:21 crc kubenswrapper[4942]: I0218 19:21:21.702573 4942 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 18 19:21:21 crc kubenswrapper[4942]: E0218 19:21:21.702962 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="200ms" Feb 18 19:21:21 crc kubenswrapper[4942]: E0218 19:21:21.904404 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="400ms" Feb 18 19:21:22 crc kubenswrapper[4942]: E0218 19:21:22.305379 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="800ms" Feb 18 19:21:23 crc kubenswrapper[4942]: E0218 19:21:23.106496 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="1.6s" Feb 18 19:21:24 crc kubenswrapper[4942]: E0218 19:21:24.707539 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="3.2s" Feb 18 19:21:27 crc kubenswrapper[4942]: E0218 19:21:27.908586 4942 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="6.4s" Feb 18 19:21:28 crc kubenswrapper[4942]: E0218 19:21:28.769012 4942 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.188:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18956d8c05eebe81 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:21:16.602539649 +0000 UTC m=+236.307472314,LastTimestamp:2026-02-18 19:21:16.602539649 +0000 UTC m=+236.307472314,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:21:29 crc kubenswrapper[4942]: I0218 19:21:29.035689 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:29 crc kubenswrapper[4942]: I0218 19:21:29.036624 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:29 crc kubenswrapper[4942]: I0218 19:21:29.067854 4942 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4da93830-99a3-4d84-91c8-a5352a987b3f" Feb 18 19:21:29 crc kubenswrapper[4942]: I0218 19:21:29.068119 4942 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4da93830-99a3-4d84-91c8-a5352a987b3f" Feb 18 19:21:29 crc kubenswrapper[4942]: E0218 19:21:29.068716 4942 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:29 crc kubenswrapper[4942]: I0218 19:21:29.069665 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.066390 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.067811 4942 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4" exitCode=1 Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.067878 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4"} Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.068922 4942 scope.go:117] "RemoveContainer" containerID="a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4" Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.069752 4942 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.070373 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.074321 4942 generic.go:334] "Generic (PLEG): 
container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="fa1703555cb3fb7f21a9e0d57be7e2d37dad00a8ff5e00e3a584823e82a9a71d" exitCode=0 Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.074399 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"fa1703555cb3fb7f21a9e0d57be7e2d37dad00a8ff5e00e3a584823e82a9a71d"} Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.074957 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b0c3eddb68e1ad1a8b897f6bd0279d49852d50f8b285c09b376df99296f4d9ba"} Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.075844 4942 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4da93830-99a3-4d84-91c8-a5352a987b3f" Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.075924 4942 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4da93830-99a3-4d84-91c8-a5352a987b3f" Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.076365 4942 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:30 crc kubenswrapper[4942]: E0218 19:21:30.076801 4942 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:30 crc 
kubenswrapper[4942]: I0218 19:21:30.076997 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:31 crc kubenswrapper[4942]: I0218 19:21:31.100149 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 19:21:31 crc kubenswrapper[4942]: I0218 19:21:31.100271 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5036f7403b191694066ef320028a2bf55bd13b329a0f1d42f1a10a59b7bac1be"} Feb 18 19:21:31 crc kubenswrapper[4942]: I0218 19:21:31.105941 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"253d75b2ceadf269e8eebde80a5116a461b9c35dff7feb969427e61d705af5ee"} Feb 18 19:21:31 crc kubenswrapper[4942]: I0218 19:21:31.105979 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"35e1e6e82d1c193dc851eefeee988519e4dad0c8f4a376471c152d12d878218a"} Feb 18 19:21:31 crc kubenswrapper[4942]: I0218 19:21:31.105989 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eaa409a84e6532023beb10a5eba80842788739fb818b83a09abcf1c01f9f8972"} Feb 18 19:21:32 crc kubenswrapper[4942]: I0218 19:21:32.115630 4942 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e66ecb43a8be01a3e34f28aec8f5fa7100a9b6199018b4b430a8b5563769ffeb"} Feb 18 19:21:32 crc kubenswrapper[4942]: I0218 19:21:32.116024 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"54c0a911eba9a8d427b880b3104dc6abd434c843d2d34b687f30414cbd53f687"} Feb 18 19:21:32 crc kubenswrapper[4942]: I0218 19:21:32.116080 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:32 crc kubenswrapper[4942]: I0218 19:21:32.116121 4942 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4da93830-99a3-4d84-91c8-a5352a987b3f" Feb 18 19:21:32 crc kubenswrapper[4942]: I0218 19:21:32.116148 4942 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4da93830-99a3-4d84-91c8-a5352a987b3f" Feb 18 19:21:34 crc kubenswrapper[4942]: I0218 19:21:34.070218 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:34 crc kubenswrapper[4942]: I0218 19:21:34.070820 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:34 crc kubenswrapper[4942]: I0218 19:21:34.079973 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:34 crc kubenswrapper[4942]: I0218 19:21:34.714567 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:21:34 crc kubenswrapper[4942]: I0218 19:21:34.720790 4942 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:21:35 crc kubenswrapper[4942]: I0218 19:21:35.131621 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:21:37 crc kubenswrapper[4942]: I0218 19:21:37.131123 4942 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:37 crc kubenswrapper[4942]: I0218 19:21:37.191372 4942 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="50238630-c949-4324-b183-d0cf16046628" Feb 18 19:21:38 crc kubenswrapper[4942]: I0218 19:21:38.153596 4942 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4da93830-99a3-4d84-91c8-a5352a987b3f" Feb 18 19:21:38 crc kubenswrapper[4942]: I0218 19:21:38.153629 4942 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4da93830-99a3-4d84-91c8-a5352a987b3f" Feb 18 19:21:38 crc kubenswrapper[4942]: I0218 19:21:38.158873 4942 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="50238630-c949-4324-b183-d0cf16046628" Feb 18 19:21:46 crc kubenswrapper[4942]: I0218 19:21:46.677444 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:21:46 crc kubenswrapper[4942]: I0218 19:21:46.791262 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 19:21:47 crc kubenswrapper[4942]: I0218 19:21:47.070470 4942 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 19:21:47 crc kubenswrapper[4942]: I0218 19:21:47.465201 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 19:21:47 crc kubenswrapper[4942]: I0218 19:21:47.634850 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 19:21:47 crc kubenswrapper[4942]: I0218 19:21:47.853869 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 19:21:48 crc kubenswrapper[4942]: I0218 19:21:48.215437 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 19:21:48 crc kubenswrapper[4942]: I0218 19:21:48.345992 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 19:21:48 crc kubenswrapper[4942]: I0218 19:21:48.445196 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 19:21:48 crc kubenswrapper[4942]: I0218 19:21:48.526855 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 19:21:48 crc kubenswrapper[4942]: I0218 19:21:48.740926 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 19:21:48 crc kubenswrapper[4942]: I0218 19:21:48.814994 4942 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.210496 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 
18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.460083 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.616311 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.740970 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.761251 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.777017 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.874938 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.961532 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.991265 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.126160 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.150928 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.198094 4942 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.253875 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.417922 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.489551 4942 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.497478 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.497568 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.497598 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfkrb","openshift-marketplace/redhat-operators-5dn7d","openshift-marketplace/certified-operators-tm22r","openshift-marketplace/community-operators-gjnbk","openshift-marketplace/redhat-marketplace-vrlpg"] Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.498015 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vrlpg" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerName="registry-server" containerID="cri-o://0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87" gracePeriod=30 Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.498600 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gjnbk" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="registry-server" 
containerID="cri-o://c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58" gracePeriod=30 Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.498838 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5dn7d" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerName="registry-server" containerID="cri-o://5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c" gracePeriod=30 Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.499250 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tm22r" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerName="registry-server" containerID="cri-o://83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b" gracePeriod=30 Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.500403 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" podUID="efab374b-fec3-4b4e-81f1-002715812a67" containerName="marketplace-operator" containerID="cri-o://1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241" gracePeriod=30 Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.502974 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.561033 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.56101556 podStartE2EDuration="13.56101556s" podCreationTimestamp="2026-02-18 19:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:21:50.557537691 +0000 UTC m=+270.262470406" watchObservedRunningTime="2026-02-18 19:21:50.56101556 +0000 UTC m=+270.265948235" Feb 18 19:21:50 crc 
kubenswrapper[4942]: I0218 19:21:50.639548 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.856824 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.875298 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.905064 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dn7d" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.967344 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-catalog-content\") pod \"fc54a822-e044-4d85-a0a8-499a79d09aaf\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.967403 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-utilities\") pod \"fc54a822-e044-4d85-a0a8-499a79d09aaf\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.967436 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjnb8\" (UniqueName: \"kubernetes.io/projected/fc54a822-e044-4d85-a0a8-499a79d09aaf-kube-api-access-bjnb8\") pod \"fc54a822-e044-4d85-a0a8-499a79d09aaf\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.968326 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.968691 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-utilities" (OuterVolumeSpecName: "utilities") pod "fc54a822-e044-4d85-a0a8-499a79d09aaf" (UID: "fc54a822-e044-4d85-a0a8-499a79d09aaf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.975023 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.978408 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.978657 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc54a822-e044-4d85-a0a8-499a79d09aaf-kube-api-access-bjnb8" (OuterVolumeSpecName: "kube-api-access-bjnb8") pod "fc54a822-e044-4d85-a0a8-499a79d09aaf" (UID: "fc54a822-e044-4d85-a0a8-499a79d09aaf"). InnerVolumeSpecName "kube-api-access-bjnb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.983030 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.046014 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.068705 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-catalog-content\") pod \"9b0511d8-736f-48fa-94a5-9a45e8482467\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.068746 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-operator-metrics\") pod \"efab374b-fec3-4b4e-81f1-002715812a67\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.068804 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-utilities\") pod \"a7f05662-6e61-4d86-8a52-13000d4bd2be\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.068820 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw4w8\" (UniqueName: \"kubernetes.io/projected/9b0511d8-736f-48fa-94a5-9a45e8482467-kube-api-access-lw4w8\") pod \"9b0511d8-736f-48fa-94a5-9a45e8482467\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.068836 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89296\" (UniqueName: \"kubernetes.io/projected/a7f05662-6e61-4d86-8a52-13000d4bd2be-kube-api-access-89296\") pod 
\"a7f05662-6e61-4d86-8a52-13000d4bd2be\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.068979 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.068989 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjnb8\" (UniqueName: \"kubernetes.io/projected/fc54a822-e044-4d85-a0a8-499a79d09aaf-kube-api-access-bjnb8\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.070285 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-utilities" (OuterVolumeSpecName: "utilities") pod "a7f05662-6e61-4d86-8a52-13000d4bd2be" (UID: "a7f05662-6e61-4d86-8a52-13000d4bd2be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.071493 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7f05662-6e61-4d86-8a52-13000d4bd2be-kube-api-access-89296" (OuterVolumeSpecName: "kube-api-access-89296") pod "a7f05662-6e61-4d86-8a52-13000d4bd2be" (UID: "a7f05662-6e61-4d86-8a52-13000d4bd2be"). InnerVolumeSpecName "kube-api-access-89296". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.072693 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0511d8-736f-48fa-94a5-9a45e8482467-kube-api-access-lw4w8" (OuterVolumeSpecName: "kube-api-access-lw4w8") pod "9b0511d8-736f-48fa-94a5-9a45e8482467" (UID: "9b0511d8-736f-48fa-94a5-9a45e8482467"). InnerVolumeSpecName "kube-api-access-lw4w8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.074128 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "efab374b-fec3-4b4e-81f1-002715812a67" (UID: "efab374b-fec3-4b4e-81f1-002715812a67"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.108128 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc54a822-e044-4d85-a0a8-499a79d09aaf" (UID: "fc54a822-e044-4d85-a0a8-499a79d09aaf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.141728 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b0511d8-736f-48fa-94a5-9a45e8482467" (UID: "9b0511d8-736f-48fa-94a5-9a45e8482467"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.169386 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-trusted-ca\") pod \"efab374b-fec3-4b4e-81f1-002715812a67\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.169440 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-utilities\") pod \"07639322-4f8b-47d5-85c7-da678ca9eaf1\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.169544 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phjwr\" (UniqueName: \"kubernetes.io/projected/efab374b-fec3-4b4e-81f1-002715812a67-kube-api-access-phjwr\") pod \"efab374b-fec3-4b4e-81f1-002715812a67\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.169568 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-catalog-content\") pod \"07639322-4f8b-47d5-85c7-da678ca9eaf1\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.169613 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8vnr\" (UniqueName: \"kubernetes.io/projected/07639322-4f8b-47d5-85c7-da678ca9eaf1-kube-api-access-h8vnr\") pod \"07639322-4f8b-47d5-85c7-da678ca9eaf1\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.169641 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-catalog-content\") pod \"a7f05662-6e61-4d86-8a52-13000d4bd2be\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.169671 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-utilities\") pod \"9b0511d8-736f-48fa-94a5-9a45e8482467\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170323 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "efab374b-fec3-4b4e-81f1-002715812a67" (UID: "efab374b-fec3-4b4e-81f1-002715812a67"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170592 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-utilities" (OuterVolumeSpecName: "utilities") pod "9b0511d8-736f-48fa-94a5-9a45e8482467" (UID: "9b0511d8-736f-48fa-94a5-9a45e8482467"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170819 4942 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170840 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170855 4942 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170913 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170928 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170940 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw4w8\" (UniqueName: \"kubernetes.io/projected/9b0511d8-736f-48fa-94a5-9a45e8482467-kube-api-access-lw4w8\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170953 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89296\" (UniqueName: \"kubernetes.io/projected/a7f05662-6e61-4d86-8a52-13000d4bd2be-kube-api-access-89296\") on node \"crc\" DevicePath \"\"" Feb 
18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170964 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.171101 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-utilities" (OuterVolumeSpecName: "utilities") pod "07639322-4f8b-47d5-85c7-da678ca9eaf1" (UID: "07639322-4f8b-47d5-85c7-da678ca9eaf1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.173903 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efab374b-fec3-4b4e-81f1-002715812a67-kube-api-access-phjwr" (OuterVolumeSpecName: "kube-api-access-phjwr") pod "efab374b-fec3-4b4e-81f1-002715812a67" (UID: "efab374b-fec3-4b4e-81f1-002715812a67"). InnerVolumeSpecName "kube-api-access-phjwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.174318 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07639322-4f8b-47d5-85c7-da678ca9eaf1-kube-api-access-h8vnr" (OuterVolumeSpecName: "kube-api-access-h8vnr") pod "07639322-4f8b-47d5-85c7-da678ca9eaf1" (UID: "07639322-4f8b-47d5-85c7-da678ca9eaf1"). InnerVolumeSpecName "kube-api-access-h8vnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.208523 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07639322-4f8b-47d5-85c7-da678ca9eaf1" (UID: "07639322-4f8b-47d5-85c7-da678ca9eaf1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.217224 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.228472 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.239058 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.242734 4942 generic.go:334] "Generic (PLEG): container finished" podID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerID="c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58" exitCode=0 Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.242847 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjnbk" event={"ID":"a7f05662-6e61-4d86-8a52-13000d4bd2be","Type":"ContainerDied","Data":"c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58"} Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.242869 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.242880 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjnbk" event={"ID":"a7f05662-6e61-4d86-8a52-13000d4bd2be","Type":"ContainerDied","Data":"48be7d221e592c508e0024c55b4c7ad66329680b58e7532a74bd5a930a0ac4bd"} Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.242899 4942 scope.go:117] "RemoveContainer" containerID="c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.246241 4942 generic.go:334] "Generic (PLEG): container finished" podID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerID="0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87" exitCode=0 Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.246289 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrlpg" event={"ID":"07639322-4f8b-47d5-85c7-da678ca9eaf1","Type":"ContainerDied","Data":"0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87"} Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.246325 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrlpg" event={"ID":"07639322-4f8b-47d5-85c7-da678ca9eaf1","Type":"ContainerDied","Data":"1342033222b8b7017fedfcc1a993530ce3bb6c2c950b03c672270884763e7952"} Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.246865 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.254342 4942 generic.go:334] "Generic (PLEG): container finished" podID="efab374b-fec3-4b4e-81f1-002715812a67" containerID="1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241" exitCode=0 Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.254393 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" event={"ID":"efab374b-fec3-4b4e-81f1-002715812a67","Type":"ContainerDied","Data":"1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241"} Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.254430 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" event={"ID":"efab374b-fec3-4b4e-81f1-002715812a67","Type":"ContainerDied","Data":"3c276811f364fb83706109331be8399abc2c7a535cfd237e4abe3dc07119fee5"} Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.254377 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.259826 4942 generic.go:334] "Generic (PLEG): container finished" podID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerID="83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b" exitCode=0 Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.259881 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7f05662-6e61-4d86-8a52-13000d4bd2be" (UID: "a7f05662-6e61-4d86-8a52-13000d4bd2be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.259946 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.260206 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tm22r" event={"ID":"9b0511d8-736f-48fa-94a5-9a45e8482467","Type":"ContainerDied","Data":"83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b"} Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.260248 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tm22r" event={"ID":"9b0511d8-736f-48fa-94a5-9a45e8482467","Type":"ContainerDied","Data":"903844334b076d9d3fb48a98e733d182c6c0ea5de7f8aeb1362b7e203a4a8fa4"} Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.264454 4942 generic.go:334] "Generic (PLEG): container finished" podID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerID="5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c" exitCode=0 Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.264493 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dn7d" event={"ID":"fc54a822-e044-4d85-a0a8-499a79d09aaf","Type":"ContainerDied","Data":"5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c"} Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.264512 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dn7d" event={"ID":"fc54a822-e044-4d85-a0a8-499a79d09aaf","Type":"ContainerDied","Data":"56923a9d84e1c384a4de3a0f2cac66f27ae78aee76d844588bcd57af55695ead"} Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.264568 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5dn7d" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.271748 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.271826 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8vnr\" (UniqueName: \"kubernetes.io/projected/07639322-4f8b-47d5-85c7-da678ca9eaf1-kube-api-access-h8vnr\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.271849 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.271864 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.271880 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phjwr\" (UniqueName: \"kubernetes.io/projected/efab374b-fec3-4b4e-81f1-002715812a67-kube-api-access-phjwr\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.309328 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrlpg"] Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.311783 4942 scope.go:117] "RemoveContainer" containerID="c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.311841 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 19:21:51 crc 
kubenswrapper[4942]: I0218 19:21:51.312574 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrlpg"] Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.328724 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tm22r"] Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.332047 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.337923 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tm22r"] Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.346850 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5dn7d"] Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.354292 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5dn7d"] Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.354562 4942 scope.go:117] "RemoveContainer" containerID="e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.360006 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfkrb"] Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.365602 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfkrb"] Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.375562 4942 scope.go:117] "RemoveContainer" containerID="c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58" Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.376538 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58\": container with ID starting with c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58 not found: ID does not exist" containerID="c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.376584 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58"} err="failed to get container status \"c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58\": rpc error: code = NotFound desc = could not find container \"c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58\": container with ID starting with c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58 not found: ID does not exist" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.376609 4942 scope.go:117] "RemoveContainer" containerID="c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff" Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.377009 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff\": container with ID starting with c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff not found: ID does not exist" containerID="c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.377038 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff"} err="failed to get container status \"c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff\": rpc error: code = NotFound desc = could not find container \"c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff\": container with ID 
starting with c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff not found: ID does not exist" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.377056 4942 scope.go:117] "RemoveContainer" containerID="e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4" Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.377474 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4\": container with ID starting with e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4 not found: ID does not exist" containerID="e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.377520 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4"} err="failed to get container status \"e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4\": rpc error: code = NotFound desc = could not find container \"e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4\": container with ID starting with e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4 not found: ID does not exist" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.377541 4942 scope.go:117] "RemoveContainer" containerID="0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.377664 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.392574 4942 scope.go:117] "RemoveContainer" containerID="6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.414551 4942 scope.go:117] "RemoveContainer" 
containerID="656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.428750 4942 scope.go:117] "RemoveContainer" containerID="0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87" Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.429164 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87\": container with ID starting with 0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87 not found: ID does not exist" containerID="0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.429195 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87"} err="failed to get container status \"0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87\": rpc error: code = NotFound desc = could not find container \"0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87\": container with ID starting with 0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87 not found: ID does not exist" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.429220 4942 scope.go:117] "RemoveContainer" containerID="6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09" Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.429544 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09\": container with ID starting with 6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09 not found: ID does not exist" containerID="6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09" Feb 18 19:21:51 crc 
kubenswrapper[4942]: I0218 19:21:51.429566 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09"} err="failed to get container status \"6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09\": rpc error: code = NotFound desc = could not find container \"6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09\": container with ID starting with 6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09 not found: ID does not exist" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.429580 4942 scope.go:117] "RemoveContainer" containerID="656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a" Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.429865 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a\": container with ID starting with 656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a not found: ID does not exist" containerID="656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.429884 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a"} err="failed to get container status \"656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a\": rpc error: code = NotFound desc = could not find container \"656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a\": container with ID starting with 656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a not found: ID does not exist" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.429897 4942 scope.go:117] "RemoveContainer" containerID="1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241" Feb 18 
19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.443872 4942 scope.go:117] "RemoveContainer" containerID="1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241" Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.444130 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241\": container with ID starting with 1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241 not found: ID does not exist" containerID="1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.444153 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241"} err="failed to get container status \"1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241\": rpc error: code = NotFound desc = could not find container \"1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241\": container with ID starting with 1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241 not found: ID does not exist" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.444170 4942 scope.go:117] "RemoveContainer" containerID="83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.461473 4942 scope.go:117] "RemoveContainer" containerID="a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.475194 4942 scope.go:117] "RemoveContainer" containerID="75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.488033 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 
19:21:51.488391 4942 scope.go:117] "RemoveContainer" containerID="83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b" Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.488688 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b\": container with ID starting with 83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b not found: ID does not exist" containerID="83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.488715 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b"} err="failed to get container status \"83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b\": rpc error: code = NotFound desc = could not find container \"83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b\": container with ID starting with 83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b not found: ID does not exist" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.488739 4942 scope.go:117] "RemoveContainer" containerID="a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848" Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.489155 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848\": container with ID starting with a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848 not found: ID does not exist" containerID="a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.489186 4942 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848"} err="failed to get container status \"a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848\": rpc error: code = NotFound desc = could not find container \"a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848\": container with ID starting with a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848 not found: ID does not exist" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.489205 4942 scope.go:117] "RemoveContainer" containerID="75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f" Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.489431 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f\": container with ID starting with 75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f not found: ID does not exist" containerID="75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.489455 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f"} err="failed to get container status \"75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f\": rpc error: code = NotFound desc = could not find container \"75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f\": container with ID starting with 75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f not found: ID does not exist" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.489472 4942 scope.go:117] "RemoveContainer" containerID="5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.502422 4942 scope.go:117] "RemoveContainer" 
containerID="f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.519162 4942 scope.go:117] "RemoveContainer" containerID="45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.521909 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.541630 4942 scope.go:117] "RemoveContainer" containerID="5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c" Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.542148 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c\": container with ID starting with 5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c not found: ID does not exist" containerID="5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.542179 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c"} err="failed to get container status \"5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c\": rpc error: code = NotFound desc = could not find container \"5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c\": container with ID starting with 5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c not found: ID does not exist" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.542205 4942 scope.go:117] "RemoveContainer" containerID="f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b" Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.542451 4942 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b\": container with ID starting with f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b not found: ID does not exist" containerID="f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.542474 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b"} err="failed to get container status \"f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b\": rpc error: code = NotFound desc = could not find container \"f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b\": container with ID starting with f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b not found: ID does not exist" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.542492 4942 scope.go:117] "RemoveContainer" containerID="45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980" Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.542720 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980\": container with ID starting with 45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980 not found: ID does not exist" containerID="45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.542743 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980"} err="failed to get container status \"45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980\": rpc error: code = NotFound desc = could not find container 
\"45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980\": container with ID starting with 45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980 not found: ID does not exist" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.568329 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjnbk"] Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.571350 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gjnbk"] Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.572848 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.600696 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.765090 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.863165 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.993478 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.060828 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.064984 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.138517 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" 
Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.158159 4942 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.172451 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.202268 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.233602 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.286892 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.327404 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.435374 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.486789 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.506591 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.586553 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.643436 4942 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.840891 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.841075 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.947251 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.949004 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.018930 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.028159 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.042533 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" path="/var/lib/kubelet/pods/07639322-4f8b-47d5-85c7-da678ca9eaf1/volumes" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.043507 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" path="/var/lib/kubelet/pods/9b0511d8-736f-48fa-94a5-9a45e8482467/volumes" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.044205 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" path="/var/lib/kubelet/pods/a7f05662-6e61-4d86-8a52-13000d4bd2be/volumes" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.045471 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="efab374b-fec3-4b4e-81f1-002715812a67" path="/var/lib/kubelet/pods/efab374b-fec3-4b4e-81f1-002715812a67/volumes" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.046014 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" path="/var/lib/kubelet/pods/fc54a822-e044-4d85-a0a8-499a79d09aaf/volumes" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.137488 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.217223 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.228424 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.263135 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.267373 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.440330 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.475395 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.567967 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.627252 4942 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.648737 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.679932 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.794167 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.805595 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.875543 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.881583 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.882639 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.895996 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.936291 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.998887 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 
19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.031847 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.077485 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.082670 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.153582 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.154664 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.340400 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.427642 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.432614 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.537966 4942 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.554140 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.612113 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 19:21:54 crc kubenswrapper[4942]: 
I0218 19:21:54.620915 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.621868 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.735118 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.747140 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.819507 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.845393 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.877431 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.927325 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.026397 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.055967 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.056385 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:21:55 crc 
kubenswrapper[4942]: I0218 19:21:55.140690 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.144904 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.149256 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.188752 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.198272 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.230210 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.357368 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.421034 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.511346 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.511653 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.634905 4942 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.638579 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.655936 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.683717 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.829753 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.853937 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.930343 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.957086 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.968340 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.993010 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.026076 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.097815 4942 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.106435 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.120714 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.125039 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.198395 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.255417 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.285649 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.331035 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.405344 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.431151 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.439384 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 19:21:56 crc 
kubenswrapper[4942]: I0218 19:21:56.510201 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.560538 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.603977 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.760306 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.808263 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.855919 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.001780 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.197845 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.202134 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.265975 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.339156 4942 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.566969 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.569736 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.591810 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.707883 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.760548 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.762722 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.804321 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.814210 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.839554 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7r96m"] Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.839854 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerName="registry-server" Feb 18 19:21:57 
crc kubenswrapper[4942]: I0218 19:21:57.839885 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.839907 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.839920 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.839937 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="extract-utilities" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.839949 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="extract-utilities" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.839972 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.839984 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840001 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="extract-content" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840013 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="extract-content" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840027 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" containerName="installer" Feb 18 19:21:57 crc 
kubenswrapper[4942]: I0218 19:21:57.840039 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" containerName="installer" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840055 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerName="extract-utilities" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840067 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerName="extract-utilities" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840081 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840093 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840113 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerName="extract-content" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840125 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerName="extract-content" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840139 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerName="extract-utilities" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840151 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerName="extract-utilities" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840172 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerName="extract-content" Feb 18 19:21:57 crc 
kubenswrapper[4942]: I0218 19:21:57.840186 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerName="extract-content" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840199 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerName="extract-content" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840210 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerName="extract-content" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840226 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efab374b-fec3-4b4e-81f1-002715812a67" containerName="marketplace-operator" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840238 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="efab374b-fec3-4b4e-81f1-002715812a67" containerName="marketplace-operator" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840361 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerName="extract-utilities" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840376 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerName="extract-utilities" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840637 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" containerName="installer" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840662 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840722 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerName="registry-server" Feb 18 19:21:57 crc 
kubenswrapper[4942]: I0218 19:21:57.840740 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="efab374b-fec3-4b4e-81f1-002715812a67" containerName="marketplace-operator" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840808 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840833 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.842101 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.846845 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7r96m"] Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.847742 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.848062 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.848393 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.848667 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.853651 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.859752 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrd55\" (UniqueName: \"kubernetes.io/projected/cdeaab03-0cb4-484c-be64-2a535c7ab318-kube-api-access-vrd55\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.859839 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdeaab03-0cb4-484c-be64-2a535c7ab318-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.859875 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdeaab03-0cb4-484c-be64-2a535c7ab318-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.913414 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.960660 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrd55\" (UniqueName: \"kubernetes.io/projected/cdeaab03-0cb4-484c-be64-2a535c7ab318-kube-api-access-vrd55\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.960705 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdeaab03-0cb4-484c-be64-2a535c7ab318-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.960732 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdeaab03-0cb4-484c-be64-2a535c7ab318-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.961887 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdeaab03-0cb4-484c-be64-2a535c7ab318-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.973462 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdeaab03-0cb4-484c-be64-2a535c7ab318-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.975851 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrd55\" (UniqueName: \"kubernetes.io/projected/cdeaab03-0cb4-484c-be64-2a535c7ab318-kube-api-access-vrd55\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.003912 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.074324 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.153630 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.164349 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.202025 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.212512 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.256816 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.292782 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.301600 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.303887 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.356238 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.398932 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.399635 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.415276 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7r96m"] Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.442667 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.456108 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.476538 4942 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.500227 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.526526 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.671754 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.693488 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.701024 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.806451 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.811236 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.922846 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.041901 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.061706 4942 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.084476 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.092800 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.133793 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.186747 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.215006 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.270953 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.279704 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.318135 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7r96m_cdeaab03-0cb4-484c-be64-2a535c7ab318/marketplace-operator/0.log" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.318357 4942 generic.go:334] "Generic (PLEG): container finished" podID="cdeaab03-0cb4-484c-be64-2a535c7ab318" containerID="63d25efdcea7d65d362ccb8f142ebfdb9fcd3359c017c79f942e1e8d9cb6e32c" exitCode=1 Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.318415 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" event={"ID":"cdeaab03-0cb4-484c-be64-2a535c7ab318","Type":"ContainerDied","Data":"63d25efdcea7d65d362ccb8f142ebfdb9fcd3359c017c79f942e1e8d9cb6e32c"} Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.318468 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" event={"ID":"cdeaab03-0cb4-484c-be64-2a535c7ab318","Type":"ContainerStarted","Data":"5f98921b4033d3ff95b638a7df72e43ca5834c8570bc040d8779d0fc53704c3d"} Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.318786 4942 scope.go:117] "RemoveContainer" containerID="63d25efdcea7d65d362ccb8f142ebfdb9fcd3359c017c79f942e1e8d9cb6e32c" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.378482 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.392277 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.403862 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.432227 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.473503 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.476641 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.485832 4942 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.521423 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.624144 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.792147 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.799631 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.825697 4942 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.826136 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://17d52aa652e2262a448752f8eeedf1ade032558596806a1871b71588f0f54812" gracePeriod=5 Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.843160 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.940226 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.962037 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 
19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.006700 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.048998 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.060943 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.107585 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.234360 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.280221 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.291248 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.488502 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.491405 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.621270 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.673500 4942 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.746689 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.781302 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.809376 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.894311 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.926497 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.962125 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.974614 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.139114 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.144463 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.145833 4942 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.330683 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7r96m_cdeaab03-0cb4-484c-be64-2a535c7ab318/marketplace-operator/0.log" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.330753 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" event={"ID":"cdeaab03-0cb4-484c-be64-2a535c7ab318","Type":"ContainerStarted","Data":"77746b1ab73e5d1eac9b86c5bc420f04ef1fbd893259fe9fa7afe46382e72ea4"} Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.331384 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.333133 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.351374 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" podStartSLOduration=5.351356702 podStartE2EDuration="5.351356702s" podCreationTimestamp="2026-02-18 19:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:22:01.348276864 +0000 UTC m=+281.053209529" watchObservedRunningTime="2026-02-18 19:22:01.351356702 +0000 UTC m=+281.056289367" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.405484 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.460237 4942 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.766775 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.792393 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.964319 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 19:22:02 crc kubenswrapper[4942]: I0218 19:22:02.000805 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 19:22:02 crc kubenswrapper[4942]: I0218 19:22:02.015381 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 19:22:02 crc kubenswrapper[4942]: I0218 19:22:02.157450 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 19:22:02 crc kubenswrapper[4942]: I0218 19:22:02.457563 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 19:22:02 crc kubenswrapper[4942]: I0218 19:22:02.560899 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 19:22:02 crc kubenswrapper[4942]: I0218 19:22:02.664878 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 19:22:02 crc kubenswrapper[4942]: I0218 19:22:02.739177 4942 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 19:22:02 crc kubenswrapper[4942]: I0218 19:22:02.785935 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 19:22:03 crc kubenswrapper[4942]: I0218 19:22:03.066355 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 19:22:03 crc kubenswrapper[4942]: I0218 19:22:03.146400 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 19:22:03 crc kubenswrapper[4942]: I0218 19:22:03.158155 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 19:22:03 crc kubenswrapper[4942]: I0218 19:22:03.299877 4942 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 19:22:03 crc kubenswrapper[4942]: I0218 19:22:03.338510 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 19:22:03 crc kubenswrapper[4942]: I0218 19:22:03.383678 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 19:22:03 crc kubenswrapper[4942]: I0218 19:22:03.449787 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 19:22:03 crc kubenswrapper[4942]: I0218 19:22:03.789311 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 19:22:04 crc kubenswrapper[4942]: I0218 19:22:04.185873 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 
19:22:05.357409 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.357739 4942 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="17d52aa652e2262a448752f8eeedf1ade032558596806a1871b71588f0f54812" exitCode=137 Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.410214 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.410297 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488150 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488199 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488227 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488268 4942 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488292 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488321 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488356 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488330 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488457 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488504 4942 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488515 4942 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488523 4942 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.496072 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.589807 4942 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.589848 4942 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:06 crc kubenswrapper[4942]: I0218 19:22:06.363841 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 19:22:06 crc kubenswrapper[4942]: I0218 19:22:06.364121 4942 scope.go:117] "RemoveContainer" containerID="17d52aa652e2262a448752f8eeedf1ade032558596806a1871b71588f0f54812" Feb 18 19:22:06 crc kubenswrapper[4942]: I0218 19:22:06.364459 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:07 crc kubenswrapper[4942]: I0218 19:22:07.043991 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 18 19:22:20 crc kubenswrapper[4942]: I0218 19:22:20.799524 4942 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 18 19:22:27 crc kubenswrapper[4942]: I0218 19:22:27.983753 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4t28"] Feb 18 19:22:27 crc kubenswrapper[4942]: I0218 19:22:27.984948 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" podUID="5d6ad520-b407-4b86-867b-9e9658bfa536" containerName="controller-manager" containerID="cri-o://023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf" gracePeriod=30 Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.089548 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"] Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.089873 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" podUID="fa346657-46eb-4817-b206-4c09d46d4a55" containerName="route-controller-manager" containerID="cri-o://04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c" gracePeriod=30 Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.326046 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.384089 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.426415 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-client-ca\") pod \"5d6ad520-b407-4b86-867b-9e9658bfa536\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.426842 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d6ad520-b407-4b86-867b-9e9658bfa536-serving-cert\") pod \"5d6ad520-b407-4b86-867b-9e9658bfa536\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.426908 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-proxy-ca-bundles\") pod \"5d6ad520-b407-4b86-867b-9e9658bfa536\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.427204 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-client-ca" (OuterVolumeSpecName: "client-ca") pod "5d6ad520-b407-4b86-867b-9e9658bfa536" (UID: "5d6ad520-b407-4b86-867b-9e9658bfa536"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.427214 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bpp7\" (UniqueName: \"kubernetes.io/projected/5d6ad520-b407-4b86-867b-9e9658bfa536-kube-api-access-2bpp7\") pod \"5d6ad520-b407-4b86-867b-9e9658bfa536\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.427500 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-config\") pod \"5d6ad520-b407-4b86-867b-9e9658bfa536\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.427690 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5d6ad520-b407-4b86-867b-9e9658bfa536" (UID: "5d6ad520-b407-4b86-867b-9e9658bfa536"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.427727 4942 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.428315 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-config" (OuterVolumeSpecName: "config") pod "5d6ad520-b407-4b86-867b-9e9658bfa536" (UID: "5d6ad520-b407-4b86-867b-9e9658bfa536"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.431647 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d6ad520-b407-4b86-867b-9e9658bfa536-kube-api-access-2bpp7" (OuterVolumeSpecName: "kube-api-access-2bpp7") pod "5d6ad520-b407-4b86-867b-9e9658bfa536" (UID: "5d6ad520-b407-4b86-867b-9e9658bfa536"). InnerVolumeSpecName "kube-api-access-2bpp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.431841 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d6ad520-b407-4b86-867b-9e9658bfa536-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5d6ad520-b407-4b86-867b-9e9658bfa536" (UID: "5d6ad520-b407-4b86-867b-9e9658bfa536"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.520531 4942 generic.go:334] "Generic (PLEG): container finished" podID="5d6ad520-b407-4b86-867b-9e9658bfa536" containerID="023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf" exitCode=0 Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.520604 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.520635 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" event={"ID":"5d6ad520-b407-4b86-867b-9e9658bfa536","Type":"ContainerDied","Data":"023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf"} Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.520697 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" event={"ID":"5d6ad520-b407-4b86-867b-9e9658bfa536","Type":"ContainerDied","Data":"561f208636e4ed3a972d1961d576d8357f830eea84893972b2e168b33bc8de2c"} Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.520718 4942 scope.go:117] "RemoveContainer" containerID="023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf" Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.523218 4942 generic.go:334] "Generic (PLEG): container finished" podID="fa346657-46eb-4817-b206-4c09d46d4a55" containerID="04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c" exitCode=0 Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.523338 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" event={"ID":"fa346657-46eb-4817-b206-4c09d46d4a55","Type":"ContainerDied","Data":"04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c"} Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.523367 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.523421 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" event={"ID":"fa346657-46eb-4817-b206-4c09d46d4a55","Type":"ContainerDied","Data":"ef127dd826aba726a31acfac09be4ab1cb60219849d22bd68a56ddc0ec361b83"} Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.528507 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mddqc\" (UniqueName: \"kubernetes.io/projected/fa346657-46eb-4817-b206-4c09d46d4a55-kube-api-access-mddqc\") pod \"fa346657-46eb-4817-b206-4c09d46d4a55\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.528612 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-client-ca\") pod \"fa346657-46eb-4817-b206-4c09d46d4a55\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.528667 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-config\") pod \"fa346657-46eb-4817-b206-4c09d46d4a55\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.528840 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa346657-46eb-4817-b206-4c09d46d4a55-serving-cert\") pod \"fa346657-46eb-4817-b206-4c09d46d4a55\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.529186 4942 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.529221 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d6ad520-b407-4b86-867b-9e9658bfa536-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.529242 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bpp7\" (UniqueName: \"kubernetes.io/projected/5d6ad520-b407-4b86-867b-9e9658bfa536-kube-api-access-2bpp7\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.529263 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.529659 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-client-ca" (OuterVolumeSpecName: "client-ca") pod "fa346657-46eb-4817-b206-4c09d46d4a55" (UID: "fa346657-46eb-4817-b206-4c09d46d4a55"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.530101 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-config" (OuterVolumeSpecName: "config") pod "fa346657-46eb-4817-b206-4c09d46d4a55" (UID: "fa346657-46eb-4817-b206-4c09d46d4a55"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.534472 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa346657-46eb-4817-b206-4c09d46d4a55-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fa346657-46eb-4817-b206-4c09d46d4a55" (UID: "fa346657-46eb-4817-b206-4c09d46d4a55"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.536083 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa346657-46eb-4817-b206-4c09d46d4a55-kube-api-access-mddqc" (OuterVolumeSpecName: "kube-api-access-mddqc") pod "fa346657-46eb-4817-b206-4c09d46d4a55" (UID: "fa346657-46eb-4817-b206-4c09d46d4a55"). InnerVolumeSpecName "kube-api-access-mddqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.549888 4942 scope.go:117] "RemoveContainer" containerID="023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf"
Feb 18 19:22:28 crc kubenswrapper[4942]: E0218 19:22:28.550814 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf\": container with ID starting with 023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf not found: ID does not exist" containerID="023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf"
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.550961 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf"} err="failed to get container status \"023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf\": rpc error: code = NotFound desc = could not find container \"023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf\": container with ID starting with 023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf not found: ID does not exist"
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.550997 4942 scope.go:117] "RemoveContainer" containerID="04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c"
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.572026 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4t28"]
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.576743 4942 scope.go:117] "RemoveContainer" containerID="04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c"
Feb 18 19:22:28 crc kubenswrapper[4942]: E0218 19:22:28.577449 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c\": container with ID starting with 04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c not found: ID does not exist" containerID="04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c"
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.577533 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c"} err="failed to get container status \"04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c\": rpc error: code = NotFound desc = could not find container \"04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c\": container with ID starting with 04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c not found: ID does not exist"
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.581063 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4t28"]
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.630964 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa346657-46eb-4817-b206-4c09d46d4a55-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.631034 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mddqc\" (UniqueName: \"kubernetes.io/projected/fa346657-46eb-4817-b206-4c09d46d4a55-kube-api-access-mddqc\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.631059 4942 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-client-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.631085 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.866064 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"]
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.869428 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"]
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.048985 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d6ad520-b407-4b86-867b-9e9658bfa536" path="/var/lib/kubelet/pods/5d6ad520-b407-4b86-867b-9e9658bfa536/volumes"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.050216 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa346657-46eb-4817-b206-4c09d46d4a55" path="/var/lib/kubelet/pods/fa346657-46eb-4817-b206-4c09d46d4a55/volumes"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.083637 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8464758d69-lqr4v"]
Feb 18 19:22:29 crc kubenswrapper[4942]: E0218 19:22:29.083841 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa346657-46eb-4817-b206-4c09d46d4a55" containerName="route-controller-manager"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.083854 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa346657-46eb-4817-b206-4c09d46d4a55" containerName="route-controller-manager"
Feb 18 19:22:29 crc kubenswrapper[4942]: E0218 19:22:29.083867 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6ad520-b407-4b86-867b-9e9658bfa536" containerName="controller-manager"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.083873 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6ad520-b407-4b86-867b-9e9658bfa536" containerName="controller-manager"
Feb 18 19:22:29 crc kubenswrapper[4942]: E0218 19:22:29.083887 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.083894 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.083984 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa346657-46eb-4817-b206-4c09d46d4a55" containerName="route-controller-manager"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.083994 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.084003 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d6ad520-b407-4b86-867b-9e9658bfa536" containerName="controller-manager"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.084355 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.087351 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.089046 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.089091 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.089219 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.090070 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.092543 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.100712 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.105271 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8464758d69-lqr4v"]
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.239921 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6bnk\" (UniqueName: \"kubernetes.io/projected/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-kube-api-access-k6bnk\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.240139 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-client-ca\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.240418 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-proxy-ca-bundles\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.240523 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-serving-cert\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.240750 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-config\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.342832 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-config\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.342955 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6bnk\" (UniqueName: \"kubernetes.io/projected/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-kube-api-access-k6bnk\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.343021 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-client-ca\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.343082 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-proxy-ca-bundles\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.343123 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-serving-cert\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.345347 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-proxy-ca-bundles\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.345596 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-client-ca\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.345858 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-config\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.351282 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-serving-cert\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.373508 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6bnk\" (UniqueName: \"kubernetes.io/projected/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-kube-api-access-k6bnk\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.448716 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.659522 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8464758d69-lqr4v"]
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.080739 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"]
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.081326 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.083577 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.083860 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.084292 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.084513 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.084532 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.084549 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.096528 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"]
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.255211 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsrnq\" (UniqueName: \"kubernetes.io/projected/35947dc4-201a-4fbd-9c5c-9b0766d22557-kube-api-access-qsrnq\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.255287 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35947dc4-201a-4fbd-9c5c-9b0766d22557-serving-cert\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.255320 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-config\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.255341 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-client-ca\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.356807 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsrnq\" (UniqueName: \"kubernetes.io/projected/35947dc4-201a-4fbd-9c5c-9b0766d22557-kube-api-access-qsrnq\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.357342 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35947dc4-201a-4fbd-9c5c-9b0766d22557-serving-cert\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.357617 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-config\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.357926 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-client-ca\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.359656 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-client-ca\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.360310 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-config\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.362782 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35947dc4-201a-4fbd-9c5c-9b0766d22557-serving-cert\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.373213 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsrnq\" (UniqueName: \"kubernetes.io/projected/35947dc4-201a-4fbd-9c5c-9b0766d22557-kube-api-access-qsrnq\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.399250 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.539052 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v" event={"ID":"bf3631bc-384b-44bf-a012-7a1ab90ceb0e","Type":"ContainerStarted","Data":"915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de"}
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.539133 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v" event={"ID":"bf3631bc-384b-44bf-a012-7a1ab90ceb0e","Type":"ContainerStarted","Data":"ac5bf5f33c7a2c2e5df869efaf323352918861e3a5e68c61cf0a32573f034d12"}
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.539387 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.546382 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.555531 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v" podStartSLOduration=2.555517724 podStartE2EDuration="2.555517724s" podCreationTimestamp="2026-02-18 19:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:22:30.554193296 +0000 UTC m=+310.259125961" watchObservedRunningTime="2026-02-18 19:22:30.555517724 +0000 UTC m=+310.260450389"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.846636 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"]
Feb 18 19:22:30 crc kubenswrapper[4942]: W0218 19:22:30.854441 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35947dc4_201a_4fbd_9c5c_9b0766d22557.slice/crio-79122ab086e7f81b3ad38361643e8fa8fc3704a751d70239bb25e6b1e8aa9b08 WatchSource:0}: Error finding container 79122ab086e7f81b3ad38361643e8fa8fc3704a751d70239bb25e6b1e8aa9b08: Status 404 returned error can't find the container with id 79122ab086e7f81b3ad38361643e8fa8fc3704a751d70239bb25e6b1e8aa9b08
Feb 18 19:22:31 crc kubenswrapper[4942]: I0218 19:22:31.545312 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk" event={"ID":"35947dc4-201a-4fbd-9c5c-9b0766d22557","Type":"ContainerStarted","Data":"e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263"}
Feb 18 19:22:31 crc kubenswrapper[4942]: I0218 19:22:31.545373 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk" event={"ID":"35947dc4-201a-4fbd-9c5c-9b0766d22557","Type":"ContainerStarted","Data":"79122ab086e7f81b3ad38361643e8fa8fc3704a751d70239bb25e6b1e8aa9b08"}
Feb 18 19:22:31 crc kubenswrapper[4942]: I0218 19:22:31.545727 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:31 crc kubenswrapper[4942]: I0218 19:22:31.552369 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:31 crc kubenswrapper[4942]: I0218 19:22:31.593449 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk" podStartSLOduration=3.593428555 podStartE2EDuration="3.593428555s" podCreationTimestamp="2026-02-18 19:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:22:31.572215794 +0000 UTC m=+311.277148529" watchObservedRunningTime="2026-02-18 19:22:31.593428555 +0000 UTC m=+311.298361250"
Feb 18 19:22:40 crc kubenswrapper[4942]: I0218 19:22:40.945715 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8464758d69-lqr4v"]
Feb 18 19:22:40 crc kubenswrapper[4942]: I0218 19:22:40.946478 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v" podUID="bf3631bc-384b-44bf-a012-7a1ab90ceb0e" containerName="controller-manager" containerID="cri-o://915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de" gracePeriod=30
Feb 18 19:22:40 crc kubenswrapper[4942]: I0218 19:22:40.956997 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"]
Feb 18 19:22:40 crc kubenswrapper[4942]: I0218 19:22:40.957190 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk" podUID="35947dc4-201a-4fbd-9c5c-9b0766d22557" containerName="route-controller-manager" containerID="cri-o://e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263" gracePeriod=30
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.424016 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.512303 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsrnq\" (UniqueName: \"kubernetes.io/projected/35947dc4-201a-4fbd-9c5c-9b0766d22557-kube-api-access-qsrnq\") pod \"35947dc4-201a-4fbd-9c5c-9b0766d22557\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") "
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.512411 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-client-ca\") pod \"35947dc4-201a-4fbd-9c5c-9b0766d22557\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") "
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.512493 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35947dc4-201a-4fbd-9c5c-9b0766d22557-serving-cert\") pod \"35947dc4-201a-4fbd-9c5c-9b0766d22557\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") "
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.512534 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-config\") pod \"35947dc4-201a-4fbd-9c5c-9b0766d22557\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") "
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.513105 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-client-ca" (OuterVolumeSpecName: "client-ca") pod "35947dc4-201a-4fbd-9c5c-9b0766d22557" (UID: "35947dc4-201a-4fbd-9c5c-9b0766d22557"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.513479 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-config" (OuterVolumeSpecName: "config") pod "35947dc4-201a-4fbd-9c5c-9b0766d22557" (UID: "35947dc4-201a-4fbd-9c5c-9b0766d22557"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.515645 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.518589 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35947dc4-201a-4fbd-9c5c-9b0766d22557-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "35947dc4-201a-4fbd-9c5c-9b0766d22557" (UID: "35947dc4-201a-4fbd-9c5c-9b0766d22557"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.519152 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35947dc4-201a-4fbd-9c5c-9b0766d22557-kube-api-access-qsrnq" (OuterVolumeSpecName: "kube-api-access-qsrnq") pod "35947dc4-201a-4fbd-9c5c-9b0766d22557" (UID: "35947dc4-201a-4fbd-9c5c-9b0766d22557"). InnerVolumeSpecName "kube-api-access-qsrnq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.604355 4942 generic.go:334] "Generic (PLEG): container finished" podID="bf3631bc-384b-44bf-a012-7a1ab90ceb0e" containerID="915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de" exitCode=0
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.604399 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v" event={"ID":"bf3631bc-384b-44bf-a012-7a1ab90ceb0e","Type":"ContainerDied","Data":"915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de"}
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.604460 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v" event={"ID":"bf3631bc-384b-44bf-a012-7a1ab90ceb0e","Type":"ContainerDied","Data":"ac5bf5f33c7a2c2e5df869efaf323352918861e3a5e68c61cf0a32573f034d12"}
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.604486 4942 scope.go:117] "RemoveContainer" containerID="915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de"
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.604548 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.606080 4942 generic.go:334] "Generic (PLEG): container finished" podID="35947dc4-201a-4fbd-9c5c-9b0766d22557" containerID="e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263" exitCode=0
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.606129 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk" event={"ID":"35947dc4-201a-4fbd-9c5c-9b0766d22557","Type":"ContainerDied","Data":"e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263"}
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.606164 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk" event={"ID":"35947dc4-201a-4fbd-9c5c-9b0766d22557","Type":"ContainerDied","Data":"79122ab086e7f81b3ad38361643e8fa8fc3704a751d70239bb25e6b1e8aa9b08"}
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.606211 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.613659 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-config\") pod \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") "
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.613756 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6bnk\" (UniqueName: \"kubernetes.io/projected/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-kube-api-access-k6bnk\") pod \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") "
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.613856 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-serving-cert\") pod \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") "
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.613928 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-proxy-ca-bundles\") pod \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") "
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.613966 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-client-ca\") pod \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") "
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.614210 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35947dc4-201a-4fbd-9c5c-9b0766d22557-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.614230 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.614239 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsrnq\" (UniqueName: \"kubernetes.io/projected/35947dc4-201a-4fbd-9c5c-9b0766d22557-kube-api-access-qsrnq\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.614250 4942 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-client-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.615023 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bf3631bc-384b-44bf-a012-7a1ab90ceb0e" (UID: "bf3631bc-384b-44bf-a012-7a1ab90ceb0e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.615161 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-config" (OuterVolumeSpecName: "config") pod "bf3631bc-384b-44bf-a012-7a1ab90ceb0e" (UID: "bf3631bc-384b-44bf-a012-7a1ab90ceb0e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.615520 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-client-ca" (OuterVolumeSpecName: "client-ca") pod "bf3631bc-384b-44bf-a012-7a1ab90ceb0e" (UID: "bf3631bc-384b-44bf-a012-7a1ab90ceb0e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.619904 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bf3631bc-384b-44bf-a012-7a1ab90ceb0e" (UID: "bf3631bc-384b-44bf-a012-7a1ab90ceb0e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.619943 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-kube-api-access-k6bnk" (OuterVolumeSpecName: "kube-api-access-k6bnk") pod "bf3631bc-384b-44bf-a012-7a1ab90ceb0e" (UID: "bf3631bc-384b-44bf-a012-7a1ab90ceb0e"). InnerVolumeSpecName "kube-api-access-k6bnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.629192 4942 scope.go:117] "RemoveContainer" containerID="915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de" Feb 18 19:22:41 crc kubenswrapper[4942]: E0218 19:22:41.630931 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de\": container with ID starting with 915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de not found: ID does not exist" containerID="915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.630989 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de"} err="failed to get container status \"915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de\": rpc error: code = NotFound desc = could not find container \"915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de\": container with ID starting with 915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de not found: ID does not exist" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.631024 4942 scope.go:117] "RemoveContainer" containerID="e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.632963 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"] Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.635630 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"] Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.651315 4942 scope.go:117] "RemoveContainer" 
containerID="e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263" Feb 18 19:22:41 crc kubenswrapper[4942]: E0218 19:22:41.651816 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263\": container with ID starting with e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263 not found: ID does not exist" containerID="e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.651848 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263"} err="failed to get container status \"e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263\": rpc error: code = NotFound desc = could not find container \"e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263\": container with ID starting with e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263 not found: ID does not exist" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.714682 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.714722 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6bnk\" (UniqueName: \"kubernetes.io/projected/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-kube-api-access-k6bnk\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.714734 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.714748 4942 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.714800 4942 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.955000 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8464758d69-lqr4v"] Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.960859 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8464758d69-lqr4v"] Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.097363 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5678897b9b-xkwzk"] Feb 18 19:22:42 crc kubenswrapper[4942]: E0218 19:22:42.097681 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3631bc-384b-44bf-a012-7a1ab90ceb0e" containerName="controller-manager" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.097965 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3631bc-384b-44bf-a012-7a1ab90ceb0e" containerName="controller-manager" Feb 18 19:22:42 crc kubenswrapper[4942]: E0218 19:22:42.098012 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35947dc4-201a-4fbd-9c5c-9b0766d22557" containerName="route-controller-manager" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.098028 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="35947dc4-201a-4fbd-9c5c-9b0766d22557" containerName="route-controller-manager" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.099795 4942 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bf3631bc-384b-44bf-a012-7a1ab90ceb0e" containerName="controller-manager" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.099857 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="35947dc4-201a-4fbd-9c5c-9b0766d22557" containerName="route-controller-manager" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.100611 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.106168 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.106386 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.106471 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.106661 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.106964 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.107908 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.111524 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5678897b9b-xkwzk"] Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.119365 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 
19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.123930 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-proxy-ca-bundles\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.124137 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f32533e-b907-4ba0-a54f-df71a6863c6d-serving-cert\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.124203 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-584kq\" (UniqueName: \"kubernetes.io/projected/1f32533e-b907-4ba0-a54f-df71a6863c6d-kube-api-access-584kq\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.124274 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-config\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.124359 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-client-ca\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.225915 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-584kq\" (UniqueName: \"kubernetes.io/projected/1f32533e-b907-4ba0-a54f-df71a6863c6d-kube-api-access-584kq\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.226012 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-config\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.226086 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-client-ca\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.226149 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-proxy-ca-bundles\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.226218 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f32533e-b907-4ba0-a54f-df71a6863c6d-serving-cert\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.227747 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-client-ca\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.228239 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-proxy-ca-bundles\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.228434 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-config\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.233733 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f32533e-b907-4ba0-a54f-df71a6863c6d-serving-cert\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc 
kubenswrapper[4942]: I0218 19:22:42.248222 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-584kq\" (UniqueName: \"kubernetes.io/projected/1f32533e-b907-4ba0-a54f-df71a6863c6d-kube-api-access-584kq\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.469561 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.719445 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5678897b9b-xkwzk"] Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.044987 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35947dc4-201a-4fbd-9c5c-9b0766d22557" path="/var/lib/kubelet/pods/35947dc4-201a-4fbd-9c5c-9b0766d22557/volumes" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.046498 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf3631bc-384b-44bf-a012-7a1ab90ceb0e" path="/var/lib/kubelet/pods/bf3631bc-384b-44bf-a012-7a1ab90ceb0e/volumes" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.091047 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"] Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.091973 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.093542 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.093822 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.094156 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.094369 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.094382 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.096102 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.103198 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"] Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.148267 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-client-ca\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.148326 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjdbd\" (UniqueName: \"kubernetes.io/projected/b0c3cea3-65a4-46fc-9185-d057169b4174-kube-api-access-kjdbd\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.148371 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0c3cea3-65a4-46fc-9185-d057169b4174-serving-cert\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.148446 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-config\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.249298 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-config\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.249349 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-client-ca\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" 
(UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.249367 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjdbd\" (UniqueName: \"kubernetes.io/projected/b0c3cea3-65a4-46fc-9185-d057169b4174-kube-api-access-kjdbd\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.249397 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0c3cea3-65a4-46fc-9185-d057169b4174-serving-cert\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.250548 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-client-ca\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.251571 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-config\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.255535 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0c3cea3-65a4-46fc-9185-d057169b4174-serving-cert\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.271830 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjdbd\" (UniqueName: \"kubernetes.io/projected/b0c3cea3-65a4-46fc-9185-d057169b4174-kube-api-access-kjdbd\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.449498 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.622868 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" event={"ID":"1f32533e-b907-4ba0-a54f-df71a6863c6d","Type":"ContainerStarted","Data":"d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d"} Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.622911 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" event={"ID":"1f32533e-b907-4ba0-a54f-df71a6863c6d","Type":"ContainerStarted","Data":"c8548dc0e24ec36f22e8ba06bf062da1a1bdfa4aa5ed316e0b965a17574740e1"} Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.623245 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.629614 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.643656 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"] Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.645092 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" podStartSLOduration=2.645079757 podStartE2EDuration="2.645079757s" podCreationTimestamp="2026-02-18 19:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:22:43.642139554 +0000 UTC m=+323.347072239" watchObservedRunningTime="2026-02-18 19:22:43.645079757 +0000 UTC m=+323.350012422" Feb 18 19:22:43 crc kubenswrapper[4942]: W0218 19:22:43.649971 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0c3cea3_65a4_46fc_9185_d057169b4174.slice/crio-714d652ad9fc0a287b99250d51e1e142f7f1431c0d7742444938d5e3b88e03f0 WatchSource:0}: Error finding container 714d652ad9fc0a287b99250d51e1e142f7f1431c0d7742444938d5e3b88e03f0: Status 404 returned error can't find the container with id 714d652ad9fc0a287b99250d51e1e142f7f1431c0d7742444938d5e3b88e03f0 Feb 18 19:22:44 crc kubenswrapper[4942]: I0218 19:22:44.629273 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" event={"ID":"b0c3cea3-65a4-46fc-9185-d057169b4174","Type":"ContainerStarted","Data":"7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d"} Feb 18 19:22:44 crc kubenswrapper[4942]: I0218 19:22:44.629614 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" 
event={"ID":"b0c3cea3-65a4-46fc-9185-d057169b4174","Type":"ContainerStarted","Data":"714d652ad9fc0a287b99250d51e1e142f7f1431c0d7742444938d5e3b88e03f0"} Feb 18 19:22:44 crc kubenswrapper[4942]: I0218 19:22:44.629628 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:22:44 crc kubenswrapper[4942]: I0218 19:22:44.637850 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:22:44 crc kubenswrapper[4942]: I0218 19:22:44.656281 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" podStartSLOduration=3.656261662 podStartE2EDuration="3.656261662s" podCreationTimestamp="2026-02-18 19:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:22:44.650738786 +0000 UTC m=+324.355671461" watchObservedRunningTime="2026-02-18 19:22:44.656261662 +0000 UTC m=+324.361194317" Feb 18 19:23:07 crc kubenswrapper[4942]: I0218 19:23:07.955360 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5678897b9b-xkwzk"] Feb 18 19:23:07 crc kubenswrapper[4942]: I0218 19:23:07.958432 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" podUID="1f32533e-b907-4ba0-a54f-df71a6863c6d" containerName="controller-manager" containerID="cri-o://d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d" gracePeriod=30 Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.566543 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.591163 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-584kq\" (UniqueName: \"kubernetes.io/projected/1f32533e-b907-4ba0-a54f-df71a6863c6d-kube-api-access-584kq\") pod \"1f32533e-b907-4ba0-a54f-df71a6863c6d\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.591310 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-proxy-ca-bundles\") pod \"1f32533e-b907-4ba0-a54f-df71a6863c6d\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.591373 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-config\") pod \"1f32533e-b907-4ba0-a54f-df71a6863c6d\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.591420 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f32533e-b907-4ba0-a54f-df71a6863c6d-serving-cert\") pod \"1f32533e-b907-4ba0-a54f-df71a6863c6d\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.591470 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-client-ca\") pod \"1f32533e-b907-4ba0-a54f-df71a6863c6d\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.592170 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1f32533e-b907-4ba0-a54f-df71a6863c6d" (UID: "1f32533e-b907-4ba0-a54f-df71a6863c6d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.592451 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-client-ca" (OuterVolumeSpecName: "client-ca") pod "1f32533e-b907-4ba0-a54f-df71a6863c6d" (UID: "1f32533e-b907-4ba0-a54f-df71a6863c6d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.592663 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-config" (OuterVolumeSpecName: "config") pod "1f32533e-b907-4ba0-a54f-df71a6863c6d" (UID: "1f32533e-b907-4ba0-a54f-df71a6863c6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.598266 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f32533e-b907-4ba0-a54f-df71a6863c6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1f32533e-b907-4ba0-a54f-df71a6863c6d" (UID: "1f32533e-b907-4ba0-a54f-df71a6863c6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.606108 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f32533e-b907-4ba0-a54f-df71a6863c6d-kube-api-access-584kq" (OuterVolumeSpecName: "kube-api-access-584kq") pod "1f32533e-b907-4ba0-a54f-df71a6863c6d" (UID: "1f32533e-b907-4ba0-a54f-df71a6863c6d"). InnerVolumeSpecName "kube-api-access-584kq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.693280 4942 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.693316 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.693328 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f32533e-b907-4ba0-a54f-df71a6863c6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.693340 4942 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.693353 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-584kq\" (UniqueName: \"kubernetes.io/projected/1f32533e-b907-4ba0-a54f-df71a6863c6d-kube-api-access-584kq\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.774646 4942 generic.go:334] "Generic (PLEG): container finished" podID="1f32533e-b907-4ba0-a54f-df71a6863c6d" containerID="d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d" exitCode=0 Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.774731 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" event={"ID":"1f32533e-b907-4ba0-a54f-df71a6863c6d","Type":"ContainerDied","Data":"d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d"} Feb 18 19:23:08 crc 
kubenswrapper[4942]: I0218 19:23:08.774823 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" event={"ID":"1f32533e-b907-4ba0-a54f-df71a6863c6d","Type":"ContainerDied","Data":"c8548dc0e24ec36f22e8ba06bf062da1a1bdfa4aa5ed316e0b965a17574740e1"} Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.774859 4942 scope.go:117] "RemoveContainer" containerID="d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d" Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.774745 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.804402 4942 scope.go:117] "RemoveContainer" containerID="d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d" Feb 18 19:23:08 crc kubenswrapper[4942]: E0218 19:23:08.805205 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d\": container with ID starting with d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d not found: ID does not exist" containerID="d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d" Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.805274 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d"} err="failed to get container status \"d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d\": rpc error: code = NotFound desc = could not find container \"d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d\": container with ID starting with d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d not found: ID does not exist" Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 
19:23:08.813265 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5678897b9b-xkwzk"] Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.819305 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5678897b9b-xkwzk"] Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.044462 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f32533e-b907-4ba0-a54f-df71a6863c6d" path="/var/lib/kubelet/pods/1f32533e-b907-4ba0-a54f-df71a6863c6d/volumes" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.112835 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"] Feb 18 19:23:09 crc kubenswrapper[4942]: E0218 19:23:09.113070 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f32533e-b907-4ba0-a54f-df71a6863c6d" containerName="controller-manager" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.113085 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f32533e-b907-4ba0-a54f-df71a6863c6d" containerName="controller-manager" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.113197 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f32533e-b907-4ba0-a54f-df71a6863c6d" containerName="controller-manager" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.113616 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.116343 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.117033 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.117366 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.117651 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.118993 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.121220 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.126941 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.138541 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"] Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.199546 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9qpx\" (UniqueName: \"kubernetes.io/projected/fb4e19d0-ff6c-45f9-872e-750bb8231014-kube-api-access-m9qpx\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " 
pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.200028 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb4e19d0-ff6c-45f9-872e-750bb8231014-serving-cert\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.200140 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-config\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.200315 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-client-ca\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.200492 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-proxy-ca-bundles\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.302232 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fb4e19d0-ff6c-45f9-872e-750bb8231014-serving-cert\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.302352 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-config\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.302405 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-client-ca\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.302464 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-proxy-ca-bundles\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.302555 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9qpx\" (UniqueName: \"kubernetes.io/projected/fb4e19d0-ff6c-45f9-872e-750bb8231014-kube-api-access-m9qpx\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.304387 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-client-ca\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.305241 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-config\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.305623 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-proxy-ca-bundles\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.311890 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb4e19d0-ff6c-45f9-872e-750bb8231014-serving-cert\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.334471 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9qpx\" (UniqueName: \"kubernetes.io/projected/fb4e19d0-ff6c-45f9-872e-750bb8231014-kube-api-access-m9qpx\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:09 crc 
kubenswrapper[4942]: I0218 19:23:09.428518 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.714289 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"] Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.784133 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" event={"ID":"fb4e19d0-ff6c-45f9-872e-750bb8231014","Type":"ContainerStarted","Data":"afd90d5b443ea5cf5cebd07e2c8014114986c27f1381efaec4e0a06ed2585461"} Feb 18 19:23:10 crc kubenswrapper[4942]: I0218 19:23:10.792076 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" event={"ID":"fb4e19d0-ff6c-45f9-872e-750bb8231014","Type":"ContainerStarted","Data":"0c40db728dcdbd061b234305e3ad34d84b236777107413954e1184878bc6241d"} Feb 18 19:23:10 crc kubenswrapper[4942]: I0218 19:23:10.792404 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:10 crc kubenswrapper[4942]: I0218 19:23:10.798685 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" Feb 18 19:23:10 crc kubenswrapper[4942]: I0218 19:23:10.822615 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" podStartSLOduration=3.82259369 podStartE2EDuration="3.82259369s" podCreationTimestamp="2026-02-18 19:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:23:10.818191948 +0000 UTC m=+350.523124613" 
watchObservedRunningTime="2026-02-18 19:23:10.82259369 +0000 UTC m=+350.527526355" Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.753062 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vcns7"] Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.754779 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.756863 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.770977 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcns7"] Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.786987 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd4a143-c433-4caf-9416-c99755ec1bc5-utilities\") pod \"certified-operators-vcns7\" (UID: \"2dd4a143-c433-4caf-9416-c99755ec1bc5\") " pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.787069 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8js5p\" (UniqueName: \"kubernetes.io/projected/2dd4a143-c433-4caf-9416-c99755ec1bc5-kube-api-access-8js5p\") pod \"certified-operators-vcns7\" (UID: \"2dd4a143-c433-4caf-9416-c99755ec1bc5\") " pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.787111 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd4a143-c433-4caf-9416-c99755ec1bc5-catalog-content\") pod \"certified-operators-vcns7\" (UID: 
\"2dd4a143-c433-4caf-9416-c99755ec1bc5\") " pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.888490 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd4a143-c433-4caf-9416-c99755ec1bc5-utilities\") pod \"certified-operators-vcns7\" (UID: \"2dd4a143-c433-4caf-9416-c99755ec1bc5\") " pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.888554 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8js5p\" (UniqueName: \"kubernetes.io/projected/2dd4a143-c433-4caf-9416-c99755ec1bc5-kube-api-access-8js5p\") pod \"certified-operators-vcns7\" (UID: \"2dd4a143-c433-4caf-9416-c99755ec1bc5\") " pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.888579 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd4a143-c433-4caf-9416-c99755ec1bc5-catalog-content\") pod \"certified-operators-vcns7\" (UID: \"2dd4a143-c433-4caf-9416-c99755ec1bc5\") " pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.889012 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd4a143-c433-4caf-9416-c99755ec1bc5-catalog-content\") pod \"certified-operators-vcns7\" (UID: \"2dd4a143-c433-4caf-9416-c99755ec1bc5\") " pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.889208 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd4a143-c433-4caf-9416-c99755ec1bc5-utilities\") pod \"certified-operators-vcns7\" (UID: \"2dd4a143-c433-4caf-9416-c99755ec1bc5\") 
" pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.915422 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8js5p\" (UniqueName: \"kubernetes.io/projected/2dd4a143-c433-4caf-9416-c99755ec1bc5-kube-api-access-8js5p\") pod \"certified-operators-vcns7\" (UID: \"2dd4a143-c433-4caf-9416-c99755ec1bc5\") " pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.947076 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gk749"] Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.948107 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.949781 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.956311 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gk749"] Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.989888 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-catalog-content\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.989922 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-utilities\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749" 
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.989955 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrc6\" (UniqueName: \"kubernetes.io/projected/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-kube-api-access-jqrc6\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.091842 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqrc6\" (UniqueName: \"kubernetes.io/projected/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-kube-api-access-jqrc6\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.092017 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-utilities\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.092041 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-catalog-content\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.092526 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-catalog-content\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749" 
Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.092751 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-utilities\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.114886 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqrc6\" (UniqueName: \"kubernetes.io/projected/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-kube-api-access-jqrc6\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.120516 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.274401 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.526460 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcns7"] Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.693834 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gk749"] Feb 18 19:23:16 crc kubenswrapper[4942]: W0218 19:23:16.746671 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6b8eeb7_c370_4453_9f7e_d98d5ca2dab7.slice/crio-87dc1e96f4bad903f9fd2e3b55783e810cb0121517717454d4caa5df4f2f4427 WatchSource:0}: Error finding container 87dc1e96f4bad903f9fd2e3b55783e810cb0121517717454d4caa5df4f2f4427: Status 404 returned error can't find the container with id 87dc1e96f4bad903f9fd2e3b55783e810cb0121517717454d4caa5df4f2f4427 Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.834505 4942 generic.go:334] "Generic (PLEG): container finished" podID="2dd4a143-c433-4caf-9416-c99755ec1bc5" containerID="2d67488b90e723ad378bc6997e8d0910ea01d0ca368c949de065a80c98891cf7" exitCode=0 Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.834565 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcns7" event={"ID":"2dd4a143-c433-4caf-9416-c99755ec1bc5","Type":"ContainerDied","Data":"2d67488b90e723ad378bc6997e8d0910ea01d0ca368c949de065a80c98891cf7"} Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.834785 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcns7" event={"ID":"2dd4a143-c433-4caf-9416-c99755ec1bc5","Type":"ContainerStarted","Data":"6feac6c48e56392a7b0af37b8f76c48ff3c2d2a71b037956c438730a7ce226ae"} Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.836514 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-gk749" event={"ID":"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7","Type":"ContainerStarted","Data":"87dc1e96f4bad903f9fd2e3b55783e810cb0121517717454d4caa5df4f2f4427"} Feb 18 19:23:17 crc kubenswrapper[4942]: I0218 19:23:17.842904 4942 generic.go:334] "Generic (PLEG): container finished" podID="d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7" containerID="58306be1031c1f15f908358aea5f3bec04348aeb4791e18ff870bc3e971b704c" exitCode=0 Feb 18 19:23:17 crc kubenswrapper[4942]: I0218 19:23:17.842992 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gk749" event={"ID":"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7","Type":"ContainerDied","Data":"58306be1031c1f15f908358aea5f3bec04348aeb4791e18ff870bc3e971b704c"} Feb 18 19:23:17 crc kubenswrapper[4942]: I0218 19:23:17.845703 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcns7" event={"ID":"2dd4a143-c433-4caf-9416-c99755ec1bc5","Type":"ContainerStarted","Data":"d46d5d5064d8560819d8f7094daf05dbd4c499c7577a017891ae2ebd861026eb"} Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.153141 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hm9ft"] Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.154329 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.156207 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.168360 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hm9ft"] Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.220950 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66cgn\" (UniqueName: \"kubernetes.io/projected/e446c051-f451-4260-b0f7-d2b08c7ae991-kube-api-access-66cgn\") pod \"redhat-operators-hm9ft\" (UID: \"e446c051-f451-4260-b0f7-d2b08c7ae991\") " pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.221012 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e446c051-f451-4260-b0f7-d2b08c7ae991-catalog-content\") pod \"redhat-operators-hm9ft\" (UID: \"e446c051-f451-4260-b0f7-d2b08c7ae991\") " pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.221045 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e446c051-f451-4260-b0f7-d2b08c7ae991-utilities\") pod \"redhat-operators-hm9ft\" (UID: \"e446c051-f451-4260-b0f7-d2b08c7ae991\") " pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.323566 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e446c051-f451-4260-b0f7-d2b08c7ae991-catalog-content\") pod \"redhat-operators-hm9ft\" (UID: 
\"e446c051-f451-4260-b0f7-d2b08c7ae991\") " pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.323650 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e446c051-f451-4260-b0f7-d2b08c7ae991-utilities\") pod \"redhat-operators-hm9ft\" (UID: \"e446c051-f451-4260-b0f7-d2b08c7ae991\") " pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.323715 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66cgn\" (UniqueName: \"kubernetes.io/projected/e446c051-f451-4260-b0f7-d2b08c7ae991-kube-api-access-66cgn\") pod \"redhat-operators-hm9ft\" (UID: \"e446c051-f451-4260-b0f7-d2b08c7ae991\") " pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.324517 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e446c051-f451-4260-b0f7-d2b08c7ae991-catalog-content\") pod \"redhat-operators-hm9ft\" (UID: \"e446c051-f451-4260-b0f7-d2b08c7ae991\") " pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.324659 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e446c051-f451-4260-b0f7-d2b08c7ae991-utilities\") pod \"redhat-operators-hm9ft\" (UID: \"e446c051-f451-4260-b0f7-d2b08c7ae991\") " pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.355086 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66cgn\" (UniqueName: \"kubernetes.io/projected/e446c051-f451-4260-b0f7-d2b08c7ae991-kube-api-access-66cgn\") pod \"redhat-operators-hm9ft\" (UID: \"e446c051-f451-4260-b0f7-d2b08c7ae991\") " 
pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.357125 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z5cvk"] Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.359587 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.361288 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5cvk"] Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.363918 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.425366 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g66n2\" (UniqueName: \"kubernetes.io/projected/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-kube-api-access-g66n2\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.425416 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-catalog-content\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.425468 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-utilities\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " 
pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.476396 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.526843 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g66n2\" (UniqueName: \"kubernetes.io/projected/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-kube-api-access-g66n2\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.527285 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-catalog-content\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.527670 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-utilities\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.528178 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-catalog-content\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.528596 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-utilities\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.546111 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g66n2\" (UniqueName: \"kubernetes.io/projected/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-kube-api-access-g66n2\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.696383 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.867200 4942 generic.go:334] "Generic (PLEG): container finished" podID="2dd4a143-c433-4caf-9416-c99755ec1bc5" containerID="d46d5d5064d8560819d8f7094daf05dbd4c499c7577a017891ae2ebd861026eb" exitCode=0 Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.867526 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcns7" event={"ID":"2dd4a143-c433-4caf-9416-c99755ec1bc5","Type":"ContainerDied","Data":"d46d5d5064d8560819d8f7094daf05dbd4c499c7577a017891ae2ebd861026eb"} Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.974039 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hm9ft"] Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.141497 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5cvk"] Feb 18 19:23:19 crc kubenswrapper[4942]: W0218 19:23:19.175264 4942 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ab9f5c3_07c4_4635_9c50_a42b85ad0752.slice/crio-724f2f43deed25ba4d56cae3bf0c6c5fe6adc3d9ab675f421897534aa4ee9b4e WatchSource:0}: Error finding container 724f2f43deed25ba4d56cae3bf0c6c5fe6adc3d9ab675f421897534aa4ee9b4e: Status 404 returned error can't find the container with id 724f2f43deed25ba4d56cae3bf0c6c5fe6adc3d9ab675f421897534aa4ee9b4e Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.880872 4942 generic.go:334] "Generic (PLEG): container finished" podID="e446c051-f451-4260-b0f7-d2b08c7ae991" containerID="329f78c5fa69f6508a3a5a0ef3ea0e4eed0a29583b2f63ff497a5196576be246" exitCode=0 Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.880965 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hm9ft" event={"ID":"e446c051-f451-4260-b0f7-d2b08c7ae991","Type":"ContainerDied","Data":"329f78c5fa69f6508a3a5a0ef3ea0e4eed0a29583b2f63ff497a5196576be246"} Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.884162 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hm9ft" event={"ID":"e446c051-f451-4260-b0f7-d2b08c7ae991","Type":"ContainerStarted","Data":"60d3f1cc2e867768c6a9d5d89723aae6d1c530f4e516f46e5be89899c1bf7134"} Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.884577 4942 generic.go:334] "Generic (PLEG): container finished" podID="d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7" containerID="bf9fc8b047455cbf0f22f686ccbfba4718ea0819913e33cdf05f26cdea43b794" exitCode=0 Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.884625 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gk749" event={"ID":"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7","Type":"ContainerDied","Data":"bf9fc8b047455cbf0f22f686ccbfba4718ea0819913e33cdf05f26cdea43b794"} Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.886226 4942 generic.go:334] "Generic (PLEG): container 
finished" podID="0ab9f5c3-07c4-4635-9c50-a42b85ad0752" containerID="1b589d633c05a4b5cb0cd7dbfe1e529f5e59b71d54dee0f5a2733d4695b96757" exitCode=0 Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.886305 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5cvk" event={"ID":"0ab9f5c3-07c4-4635-9c50-a42b85ad0752","Type":"ContainerDied","Data":"1b589d633c05a4b5cb0cd7dbfe1e529f5e59b71d54dee0f5a2733d4695b96757"} Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.886811 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5cvk" event={"ID":"0ab9f5c3-07c4-4635-9c50-a42b85ad0752","Type":"ContainerStarted","Data":"724f2f43deed25ba4d56cae3bf0c6c5fe6adc3d9ab675f421897534aa4ee9b4e"} Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.889004 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcns7" event={"ID":"2dd4a143-c433-4caf-9416-c99755ec1bc5","Type":"ContainerStarted","Data":"0be7e0660beef651a0ad00dcf4e20adffe239285f7559ab52faaf44cc69dbc0c"} Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.953585 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vcns7" podStartSLOduration=2.283402345 podStartE2EDuration="4.95356884s" podCreationTimestamp="2026-02-18 19:23:15 +0000 UTC" firstStartedPulling="2026-02-18 19:23:16.836562044 +0000 UTC m=+356.541494709" lastFinishedPulling="2026-02-18 19:23:19.506728539 +0000 UTC m=+359.211661204" observedRunningTime="2026-02-18 19:23:19.951941885 +0000 UTC m=+359.656874570" watchObservedRunningTime="2026-02-18 19:23:19.95356884 +0000 UTC m=+359.658501515" Feb 18 19:23:21 crc kubenswrapper[4942]: I0218 19:23:21.926904 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hm9ft" 
event={"ID":"e446c051-f451-4260-b0f7-d2b08c7ae991","Type":"ContainerStarted","Data":"5e28183415c1de7a5c72a835abeac7915c3a18c21961567a325b259015487417"} Feb 18 19:23:21 crc kubenswrapper[4942]: I0218 19:23:21.933800 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gk749" event={"ID":"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7","Type":"ContainerStarted","Data":"f94c96e115af3fba4734cc5226fdefac5bd50d3a4e09dc7e61125cff9b6763a6"} Feb 18 19:23:21 crc kubenswrapper[4942]: I0218 19:23:21.935948 4942 generic.go:334] "Generic (PLEG): container finished" podID="0ab9f5c3-07c4-4635-9c50-a42b85ad0752" containerID="9522e32c1343be17233dc44774f2cf79d47cf58830e666d6666f69cafacfebe2" exitCode=0 Feb 18 19:23:21 crc kubenswrapper[4942]: I0218 19:23:21.936017 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5cvk" event={"ID":"0ab9f5c3-07c4-4635-9c50-a42b85ad0752","Type":"ContainerDied","Data":"9522e32c1343be17233dc44774f2cf79d47cf58830e666d6666f69cafacfebe2"} Feb 18 19:23:22 crc kubenswrapper[4942]: I0218 19:23:22.942913 4942 generic.go:334] "Generic (PLEG): container finished" podID="e446c051-f451-4260-b0f7-d2b08c7ae991" containerID="5e28183415c1de7a5c72a835abeac7915c3a18c21961567a325b259015487417" exitCode=0 Feb 18 19:23:22 crc kubenswrapper[4942]: I0218 19:23:22.942993 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hm9ft" event={"ID":"e446c051-f451-4260-b0f7-d2b08c7ae991","Type":"ContainerDied","Data":"5e28183415c1de7a5c72a835abeac7915c3a18c21961567a325b259015487417"} Feb 18 19:23:22 crc kubenswrapper[4942]: I0218 19:23:22.946828 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5cvk" event={"ID":"0ab9f5c3-07c4-4635-9c50-a42b85ad0752","Type":"ContainerStarted","Data":"ab1e0708e25431cf82c6a75f311eaf07b10b02a62e742fc93b4144890308bbf1"} Feb 18 19:23:22 crc kubenswrapper[4942]: I0218 
19:23:22.965859 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gk749" podStartSLOduration=5.5073350229999996 podStartE2EDuration="7.965840904s" podCreationTimestamp="2026-02-18 19:23:15 +0000 UTC" firstStartedPulling="2026-02-18 19:23:17.84482078 +0000 UTC m=+357.549753445" lastFinishedPulling="2026-02-18 19:23:20.303326651 +0000 UTC m=+360.008259326" observedRunningTime="2026-02-18 19:23:22.008302152 +0000 UTC m=+361.713234817" watchObservedRunningTime="2026-02-18 19:23:22.965840904 +0000 UTC m=+362.670773589" Feb 18 19:23:23 crc kubenswrapper[4942]: I0218 19:23:23.741562 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:23:23 crc kubenswrapper[4942]: I0218 19:23:23.741971 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:23:23 crc kubenswrapper[4942]: I0218 19:23:23.953236 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hm9ft" event={"ID":"e446c051-f451-4260-b0f7-d2b08c7ae991","Type":"ContainerStarted","Data":"7a8346f985956c9b038690085d37ebf096c58262f3b6decae2eca3e2cd738fae"} Feb 18 19:23:23 crc kubenswrapper[4942]: I0218 19:23:23.980607 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hm9ft" podStartSLOduration=2.323723777 podStartE2EDuration="5.980592971s" podCreationTimestamp="2026-02-18 19:23:18 +0000 UTC" firstStartedPulling="2026-02-18 
19:23:19.883227821 +0000 UTC m=+359.588160486" lastFinishedPulling="2026-02-18 19:23:23.540097015 +0000 UTC m=+363.245029680" observedRunningTime="2026-02-18 19:23:23.978194594 +0000 UTC m=+363.683127269" watchObservedRunningTime="2026-02-18 19:23:23.980592971 +0000 UTC m=+363.685525636" Feb 18 19:23:23 crc kubenswrapper[4942]: I0218 19:23:23.981272 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z5cvk" podStartSLOduration=3.548590315 podStartE2EDuration="5.981265879s" podCreationTimestamp="2026-02-18 19:23:18 +0000 UTC" firstStartedPulling="2026-02-18 19:23:19.887027976 +0000 UTC m=+359.591960641" lastFinishedPulling="2026-02-18 19:23:22.31970354 +0000 UTC m=+362.024636205" observedRunningTime="2026-02-18 19:23:22.982400752 +0000 UTC m=+362.687333407" watchObservedRunningTime="2026-02-18 19:23:23.981265879 +0000 UTC m=+363.686198544" Feb 18 19:23:26 crc kubenswrapper[4942]: I0218 19:23:26.120948 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:26 crc kubenswrapper[4942]: I0218 19:23:26.121311 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:26 crc kubenswrapper[4942]: I0218 19:23:26.162430 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:26 crc kubenswrapper[4942]: I0218 19:23:26.274529 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:26 crc kubenswrapper[4942]: I0218 19:23:26.274811 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:26 crc kubenswrapper[4942]: I0218 19:23:26.321439 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:27 crc kubenswrapper[4942]: I0218 19:23:27.014080 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:27 crc kubenswrapper[4942]: I0218 19:23:27.014140 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:28 crc kubenswrapper[4942]: I0218 19:23:28.477909 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:28 crc kubenswrapper[4942]: I0218 19:23:28.478225 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:28 crc kubenswrapper[4942]: I0218 19:23:28.696750 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:28 crc kubenswrapper[4942]: I0218 19:23:28.696808 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:28 crc kubenswrapper[4942]: I0218 19:23:28.759271 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:29 crc kubenswrapper[4942]: I0218 19:23:29.045329 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:29 crc kubenswrapper[4942]: I0218 19:23:29.546663 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hm9ft" podUID="e446c051-f451-4260-b0f7-d2b08c7ae991" containerName="registry-server" probeResult="failure" output=< Feb 18 19:23:29 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 19:23:29 crc kubenswrapper[4942]: 
> Feb 18 19:23:38 crc kubenswrapper[4942]: I0218 19:23:38.551027 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:38 crc kubenswrapper[4942]: I0218 19:23:38.623726 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.225658 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fjd9g"] Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.226583 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.249002 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fjd9g"] Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.411910 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-trusted-ca\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.411971 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.412009 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" 
(UniqueName: \"kubernetes.io/empty-dir/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.412126 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-registry-tls\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.412163 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-registry-certificates\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.412199 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9rc\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-kube-api-access-5x9rc\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.412277 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-bound-sa-token\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 
19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.412366 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.445359 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.514071 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-trusted-ca\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.514156 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.514245 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-registry-tls\") pod \"image-registry-66df7c8f76-fjd9g\" 
(UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.514277 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-registry-certificates\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.514316 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9rc\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-kube-api-access-5x9rc\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.514361 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-bound-sa-token\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.514405 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.518118 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-registry-certificates\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.518361 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-trusted-ca\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.518588 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.520839 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.521849 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-registry-tls\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.540474 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-bound-sa-token\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.559720 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x9rc\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-kube-api-access-5x9rc\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.843323 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:43 crc kubenswrapper[4942]: I0218 19:23:43.305833 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fjd9g"] Feb 18 19:23:43 crc kubenswrapper[4942]: W0218 19:23:43.316007 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5f6f3e4_1894_4fed_8ac9_5eeb9480ce4c.slice/crio-778419abdbfccb980aa4b6219aef6eb29850b3466f060d255b31fe06c9e1c8f2 WatchSource:0}: Error finding container 778419abdbfccb980aa4b6219aef6eb29850b3466f060d255b31fe06c9e1c8f2: Status 404 returned error can't find the container with id 778419abdbfccb980aa4b6219aef6eb29850b3466f060d255b31fe06c9e1c8f2 Feb 18 19:23:44 crc kubenswrapper[4942]: I0218 19:23:44.066007 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" event={"ID":"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c","Type":"ContainerStarted","Data":"f13b79d47ee4fad9b03a0ff0a36b3da5e6e5ed7241c21bfb36f3675f8c81ecba"} Feb 18 19:23:44 crc kubenswrapper[4942]: I0218 
19:23:44.066089 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" event={"ID":"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c","Type":"ContainerStarted","Data":"778419abdbfccb980aa4b6219aef6eb29850b3466f060d255b31fe06c9e1c8f2"} Feb 18 19:23:44 crc kubenswrapper[4942]: I0218 19:23:44.066412 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:44 crc kubenswrapper[4942]: I0218 19:23:44.094012 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" podStartSLOduration=2.093976441 podStartE2EDuration="2.093976441s" podCreationTimestamp="2026-02-18 19:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:23:44.091668517 +0000 UTC m=+383.796601242" watchObservedRunningTime="2026-02-18 19:23:44.093976441 +0000 UTC m=+383.798909146" Feb 18 19:23:47 crc kubenswrapper[4942]: I0218 19:23:47.992360 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"] Feb 18 19:23:47 crc kubenswrapper[4942]: I0218 19:23:47.992785 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" podUID="b0c3cea3-65a4-46fc-9185-d057169b4174" containerName="route-controller-manager" containerID="cri-o://7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d" gracePeriod=30 Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.369578 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.504971 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjdbd\" (UniqueName: \"kubernetes.io/projected/b0c3cea3-65a4-46fc-9185-d057169b4174-kube-api-access-kjdbd\") pod \"b0c3cea3-65a4-46fc-9185-d057169b4174\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.505101 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0c3cea3-65a4-46fc-9185-d057169b4174-serving-cert\") pod \"b0c3cea3-65a4-46fc-9185-d057169b4174\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.505204 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-client-ca\") pod \"b0c3cea3-65a4-46fc-9185-d057169b4174\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.505390 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-config\") pod \"b0c3cea3-65a4-46fc-9185-d057169b4174\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.506741 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-client-ca" (OuterVolumeSpecName: "client-ca") pod "b0c3cea3-65a4-46fc-9185-d057169b4174" (UID: "b0c3cea3-65a4-46fc-9185-d057169b4174"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.507197 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-config" (OuterVolumeSpecName: "config") pod "b0c3cea3-65a4-46fc-9185-d057169b4174" (UID: "b0c3cea3-65a4-46fc-9185-d057169b4174"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.509935 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0c3cea3-65a4-46fc-9185-d057169b4174-kube-api-access-kjdbd" (OuterVolumeSpecName: "kube-api-access-kjdbd") pod "b0c3cea3-65a4-46fc-9185-d057169b4174" (UID: "b0c3cea3-65a4-46fc-9185-d057169b4174"). InnerVolumeSpecName "kube-api-access-kjdbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.514406 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c3cea3-65a4-46fc-9185-d057169b4174-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b0c3cea3-65a4-46fc-9185-d057169b4174" (UID: "b0c3cea3-65a4-46fc-9185-d057169b4174"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.606828 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.606883 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjdbd\" (UniqueName: \"kubernetes.io/projected/b0c3cea3-65a4-46fc-9185-d057169b4174-kube-api-access-kjdbd\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.606904 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0c3cea3-65a4-46fc-9185-d057169b4174-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.606921 4942 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.094132 4942 generic.go:334] "Generic (PLEG): container finished" podID="b0c3cea3-65a4-46fc-9185-d057169b4174" containerID="7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d" exitCode=0 Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.094202 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" event={"ID":"b0c3cea3-65a4-46fc-9185-d057169b4174","Type":"ContainerDied","Data":"7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d"} Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.094242 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" 
event={"ID":"b0c3cea3-65a4-46fc-9185-d057169b4174","Type":"ContainerDied","Data":"714d652ad9fc0a287b99250d51e1e142f7f1431c0d7742444938d5e3b88e03f0"} Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.094265 4942 scope.go:117] "RemoveContainer" containerID="7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.094318 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.111151 4942 scope.go:117] "RemoveContainer" containerID="7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d" Feb 18 19:23:49 crc kubenswrapper[4942]: E0218 19:23:49.112101 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d\": container with ID starting with 7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d not found: ID does not exist" containerID="7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.112145 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d"} err="failed to get container status \"7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d\": rpc error: code = NotFound desc = could not find container \"7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d\": container with ID starting with 7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d not found: ID does not exist" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.117948 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"] Feb 
18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.122056 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"] Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.139564 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj"] Feb 18 19:23:49 crc kubenswrapper[4942]: E0218 19:23:49.139785 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c3cea3-65a4-46fc-9185-d057169b4174" containerName="route-controller-manager" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.139798 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c3cea3-65a4-46fc-9185-d057169b4174" containerName="route-controller-manager" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.139894 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c3cea3-65a4-46fc-9185-d057169b4174" containerName="route-controller-manager" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.140365 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.142209 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.143354 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.143360 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.143526 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.144844 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.145074 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.150440 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj"] Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.316549 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c290d1c-13cd-4018-a8ca-1f57494eaf51-serving-cert\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.316628 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c290d1c-13cd-4018-a8ca-1f57494eaf51-client-ca\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.316852 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c290d1c-13cd-4018-a8ca-1f57494eaf51-config\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.316955 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6f5b\" (UniqueName: \"kubernetes.io/projected/2c290d1c-13cd-4018-a8ca-1f57494eaf51-kube-api-access-c6f5b\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.417979 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c290d1c-13cd-4018-a8ca-1f57494eaf51-serving-cert\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.418034 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c290d1c-13cd-4018-a8ca-1f57494eaf51-client-ca\") pod 
\"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.418096 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c290d1c-13cd-4018-a8ca-1f57494eaf51-config\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.418134 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6f5b\" (UniqueName: \"kubernetes.io/projected/2c290d1c-13cd-4018-a8ca-1f57494eaf51-kube-api-access-c6f5b\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.419932 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c290d1c-13cd-4018-a8ca-1f57494eaf51-config\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.421716 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c290d1c-13cd-4018-a8ca-1f57494eaf51-client-ca\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.424526 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c290d1c-13cd-4018-a8ca-1f57494eaf51-serving-cert\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.445577 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6f5b\" (UniqueName: \"kubernetes.io/projected/2c290d1c-13cd-4018-a8ca-1f57494eaf51-kube-api-access-c6f5b\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.466447 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.867727 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj"] Feb 18 19:23:50 crc kubenswrapper[4942]: I0218 19:23:50.108719 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" event={"ID":"2c290d1c-13cd-4018-a8ca-1f57494eaf51","Type":"ContainerStarted","Data":"149575ced0e167a0eed5ca21f1b301d58fec0509ff1b889674fa452cc858f349"} Feb 18 19:23:51 crc kubenswrapper[4942]: I0218 19:23:51.060037 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0c3cea3-65a4-46fc-9185-d057169b4174" path="/var/lib/kubelet/pods/b0c3cea3-65a4-46fc-9185-d057169b4174/volumes" Feb 18 19:23:51 crc kubenswrapper[4942]: I0218 19:23:51.121820 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" event={"ID":"2c290d1c-13cd-4018-a8ca-1f57494eaf51","Type":"ContainerStarted","Data":"637e73864c6fa658ca69250ef298149cef6817155631ac96b00f3f7a70395ab3"} Feb 18 19:23:51 crc kubenswrapper[4942]: I0218 19:23:51.122381 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:51 crc kubenswrapper[4942]: I0218 19:23:51.128648 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:51 crc kubenswrapper[4942]: I0218 19:23:51.154127 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" podStartSLOduration=3.154095603 podStartE2EDuration="3.154095603s" podCreationTimestamp="2026-02-18 19:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:23:51.143725835 +0000 UTC m=+390.848658540" watchObservedRunningTime="2026-02-18 19:23:51.154095603 +0000 UTC m=+390.859028308" Feb 18 19:23:53 crc kubenswrapper[4942]: I0218 19:23:53.740910 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:23:53 crc kubenswrapper[4942]: I0218 19:23:53.741325 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 18 19:24:02 crc kubenswrapper[4942]: I0218 19:24:02.848751 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:24:02 crc kubenswrapper[4942]: I0218 19:24:02.921040 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2fcrf"] Feb 18 19:24:23 crc kubenswrapper[4942]: I0218 19:24:23.740917 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:24:23 crc kubenswrapper[4942]: I0218 19:24:23.741816 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:24:23 crc kubenswrapper[4942]: I0218 19:24:23.741891 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:24:23 crc kubenswrapper[4942]: I0218 19:24:23.742920 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cbd8c39f4ca27a862760680c197d71be21444460d43b83855f644da4c249ce06"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:24:23 crc kubenswrapper[4942]: I0218 19:24:23.743030 4942 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://cbd8c39f4ca27a862760680c197d71be21444460d43b83855f644da4c249ce06" gracePeriod=600 Feb 18 19:24:24 crc kubenswrapper[4942]: I0218 19:24:24.372137 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="cbd8c39f4ca27a862760680c197d71be21444460d43b83855f644da4c249ce06" exitCode=0 Feb 18 19:24:24 crc kubenswrapper[4942]: I0218 19:24:24.372267 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"cbd8c39f4ca27a862760680c197d71be21444460d43b83855f644da4c249ce06"} Feb 18 19:24:24 crc kubenswrapper[4942]: I0218 19:24:24.372493 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"69563ccc2ca715071d77cf8ee678820b7e15eada4a6e511a3ef021c2758d0101"} Feb 18 19:24:24 crc kubenswrapper[4942]: I0218 19:24:24.372520 4942 scope.go:117] "RemoveContainer" containerID="d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1" Feb 18 19:24:27 crc kubenswrapper[4942]: I0218 19:24:27.968757 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" podUID="087f0c6b-3e9f-4db4-bbcb-a8075e218219" containerName="registry" containerID="cri-o://91e860bb5e26a16c65c27e2d570478576e7d6d20c751b07a7d8ecff08551af59" gracePeriod=30 Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.404965 4942 generic.go:334] "Generic (PLEG): container finished" podID="087f0c6b-3e9f-4db4-bbcb-a8075e218219" containerID="91e860bb5e26a16c65c27e2d570478576e7d6d20c751b07a7d8ecff08551af59" exitCode=0 Feb 18 
19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.405093 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" event={"ID":"087f0c6b-3e9f-4db4-bbcb-a8075e218219","Type":"ContainerDied","Data":"91e860bb5e26a16c65c27e2d570478576e7d6d20c751b07a7d8ecff08551af59"} Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.454826 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.614754 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-trusted-ca\") pod \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.614824 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-bound-sa-token\") pod \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.614849 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-tls\") pod \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.615014 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 
19:24:28.615041 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/087f0c6b-3e9f-4db4-bbcb-a8075e218219-ca-trust-extracted\") pod \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.615076 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-certificates\") pod \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.615133 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25z4w\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-kube-api-access-25z4w\") pod \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.615173 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/087f0c6b-3e9f-4db4-bbcb-a8075e218219-installation-pull-secrets\") pod \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.616033 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "087f0c6b-3e9f-4db4-bbcb-a8075e218219" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.618133 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "087f0c6b-3e9f-4db4-bbcb-a8075e218219" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.623406 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "087f0c6b-3e9f-4db4-bbcb-a8075e218219" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.624755 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087f0c6b-3e9f-4db4-bbcb-a8075e218219-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "087f0c6b-3e9f-4db4-bbcb-a8075e218219" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.625526 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-kube-api-access-25z4w" (OuterVolumeSpecName: "kube-api-access-25z4w") pod "087f0c6b-3e9f-4db4-bbcb-a8075e218219" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219"). InnerVolumeSpecName "kube-api-access-25z4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.626009 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "087f0c6b-3e9f-4db4-bbcb-a8075e218219" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.628304 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "087f0c6b-3e9f-4db4-bbcb-a8075e218219" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.640025 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/087f0c6b-3e9f-4db4-bbcb-a8075e218219-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "087f0c6b-3e9f-4db4-bbcb-a8075e218219" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.717013 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25z4w\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-kube-api-access-25z4w\") on node \"crc\" DevicePath \"\"" Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.717077 4942 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/087f0c6b-3e9f-4db4-bbcb-a8075e218219-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.717104 4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.717132 4942 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.717266 4942 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.717295 4942 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/087f0c6b-3e9f-4db4-bbcb-a8075e218219-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.717321 4942 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 19:24:29 crc 
kubenswrapper[4942]: I0218 19:24:29.415492 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" event={"ID":"087f0c6b-3e9f-4db4-bbcb-a8075e218219","Type":"ContainerDied","Data":"c9af7faf6591829dd44fe7e25f59f09e1004d7cfb6e0f93079ef222657176a3e"} Feb 18 19:24:29 crc kubenswrapper[4942]: I0218 19:24:29.415583 4942 scope.go:117] "RemoveContainer" containerID="91e860bb5e26a16c65c27e2d570478576e7d6d20c751b07a7d8ecff08551af59" Feb 18 19:24:29 crc kubenswrapper[4942]: I0218 19:24:29.415870 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:24:29 crc kubenswrapper[4942]: I0218 19:24:29.446073 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2fcrf"] Feb 18 19:24:29 crc kubenswrapper[4942]: I0218 19:24:29.450479 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2fcrf"] Feb 18 19:24:31 crc kubenswrapper[4942]: I0218 19:24:31.047983 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="087f0c6b-3e9f-4db4-bbcb-a8075e218219" path="/var/lib/kubelet/pods/087f0c6b-3e9f-4db4-bbcb-a8075e218219/volumes" Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.769532 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5"] Feb 18 19:26:49 crc kubenswrapper[4942]: E0218 19:26:49.773471 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="087f0c6b-3e9f-4db4-bbcb-a8075e218219" containerName="registry" Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.773614 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="087f0c6b-3e9f-4db4-bbcb-a8075e218219" containerName="registry" Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.773867 4942 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="087f0c6b-3e9f-4db4-bbcb-a8075e218219" containerName="registry" Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.774494 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5" Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.776630 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.776788 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5"] Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.776972 4942 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-5ml7p" Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.778160 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.805709 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-p9pz8"] Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.806926 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-p9pz8" Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.808482 4942 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-gzd7w" Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.810911 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4fcbs"] Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.811843 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs" Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.815821 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-p9pz8"] Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.816645 4942 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-qsls8" Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.821754 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4fcbs"] Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.919706 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjg72\" (UniqueName: \"kubernetes.io/projected/6e365537-e12c-486a-a7e3-156ecf269ba3-kube-api-access-qjg72\") pod \"cert-manager-webhook-687f57d79b-4fcbs\" (UID: \"6e365537-e12c-486a-a7e3-156ecf269ba3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs" Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.919845 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5n97\" (UniqueName: \"kubernetes.io/projected/b67fb0f6-ae10-459f-82eb-516f6837a3c9-kube-api-access-w5n97\") pod \"cert-manager-858654f9db-p9pz8\" (UID: \"b67fb0f6-ae10-459f-82eb-516f6837a3c9\") " pod="cert-manager/cert-manager-858654f9db-p9pz8" Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.919933 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk5hq\" (UniqueName: \"kubernetes.io/projected/2d101833-8f66-4f88-931b-62659bb0b37e-kube-api-access-tk5hq\") pod \"cert-manager-cainjector-cf98fcc89-kdpq5\" (UID: \"2d101833-8f66-4f88-931b-62659bb0b37e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5" Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.021742 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjg72\" (UniqueName: \"kubernetes.io/projected/6e365537-e12c-486a-a7e3-156ecf269ba3-kube-api-access-qjg72\") pod \"cert-manager-webhook-687f57d79b-4fcbs\" (UID: \"6e365537-e12c-486a-a7e3-156ecf269ba3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs" Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.021856 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5n97\" (UniqueName: \"kubernetes.io/projected/b67fb0f6-ae10-459f-82eb-516f6837a3c9-kube-api-access-w5n97\") pod \"cert-manager-858654f9db-p9pz8\" (UID: \"b67fb0f6-ae10-459f-82eb-516f6837a3c9\") " pod="cert-manager/cert-manager-858654f9db-p9pz8" Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.022019 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk5hq\" (UniqueName: \"kubernetes.io/projected/2d101833-8f66-4f88-931b-62659bb0b37e-kube-api-access-tk5hq\") pod \"cert-manager-cainjector-cf98fcc89-kdpq5\" (UID: \"2d101833-8f66-4f88-931b-62659bb0b37e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5" Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.041338 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk5hq\" (UniqueName: \"kubernetes.io/projected/2d101833-8f66-4f88-931b-62659bb0b37e-kube-api-access-tk5hq\") pod \"cert-manager-cainjector-cf98fcc89-kdpq5\" (UID: \"2d101833-8f66-4f88-931b-62659bb0b37e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5" Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.045697 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjg72\" (UniqueName: \"kubernetes.io/projected/6e365537-e12c-486a-a7e3-156ecf269ba3-kube-api-access-qjg72\") pod \"cert-manager-webhook-687f57d79b-4fcbs\" (UID: \"6e365537-e12c-486a-a7e3-156ecf269ba3\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs" Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.046178 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5n97\" (UniqueName: \"kubernetes.io/projected/b67fb0f6-ae10-459f-82eb-516f6837a3c9-kube-api-access-w5n97\") pod \"cert-manager-858654f9db-p9pz8\" (UID: \"b67fb0f6-ae10-459f-82eb-516f6837a3c9\") " pod="cert-manager/cert-manager-858654f9db-p9pz8" Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.103534 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5" Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.135469 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-p9pz8" Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.142811 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs" Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.370313 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-p9pz8"] Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.375783 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.516353 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5"] Feb 18 19:26:50 crc kubenswrapper[4942]: W0218 19:26:50.611588 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e365537_e12c_486a_a7e3_156ecf269ba3.slice/crio-da27393b001c232a53b492697d913e4c6fc8fb889efc82f647369ae603cb73b7 WatchSource:0}: Error finding container da27393b001c232a53b492697d913e4c6fc8fb889efc82f647369ae603cb73b7: Status 404 returned error can't 
find the container with id da27393b001c232a53b492697d913e4c6fc8fb889efc82f647369ae603cb73b7 Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.612954 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4fcbs"] Feb 18 19:26:51 crc kubenswrapper[4942]: I0218 19:26:51.377323 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5" event={"ID":"2d101833-8f66-4f88-931b-62659bb0b37e","Type":"ContainerStarted","Data":"361703ec57837edec24561eb5f613e5781c6396efa55f3de81407a593207fdd5"} Feb 18 19:26:51 crc kubenswrapper[4942]: I0218 19:26:51.380960 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs" event={"ID":"6e365537-e12c-486a-a7e3-156ecf269ba3","Type":"ContainerStarted","Data":"da27393b001c232a53b492697d913e4c6fc8fb889efc82f647369ae603cb73b7"} Feb 18 19:26:51 crc kubenswrapper[4942]: I0218 19:26:51.382956 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-p9pz8" event={"ID":"b67fb0f6-ae10-459f-82eb-516f6837a3c9","Type":"ContainerStarted","Data":"4b886bc8f43ec6f7a4c065210f1025ff4e8cde472330f17e5ed865925c41d46d"} Feb 18 19:26:53 crc kubenswrapper[4942]: I0218 19:26:53.740575 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:26:53 crc kubenswrapper[4942]: I0218 19:26:53.741052 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 
19:26:54 crc kubenswrapper[4942]: I0218 19:26:54.403999 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5" event={"ID":"2d101833-8f66-4f88-931b-62659bb0b37e","Type":"ContainerStarted","Data":"3448060ea07c059a677d7cb4dbc687f3bc455914a3038ab2c48fbf5211e5064c"} Feb 18 19:26:54 crc kubenswrapper[4942]: I0218 19:26:54.405852 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-p9pz8" event={"ID":"b67fb0f6-ae10-459f-82eb-516f6837a3c9","Type":"ContainerStarted","Data":"0c3cece2a44cb1606fcea7c8e4f9e8d2a4d8463fceb231d17afeb3f14aaf8bb5"} Feb 18 19:26:54 crc kubenswrapper[4942]: I0218 19:26:54.434434 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5" podStartSLOduration=2.718978829 podStartE2EDuration="5.434405704s" podCreationTimestamp="2026-02-18 19:26:49 +0000 UTC" firstStartedPulling="2026-02-18 19:26:50.523625632 +0000 UTC m=+570.228558307" lastFinishedPulling="2026-02-18 19:26:53.239052497 +0000 UTC m=+572.943985182" observedRunningTime="2026-02-18 19:26:54.430268401 +0000 UTC m=+574.135201086" watchObservedRunningTime="2026-02-18 19:26:54.434405704 +0000 UTC m=+574.139338399" Feb 18 19:26:54 crc kubenswrapper[4942]: I0218 19:26:54.468034 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-p9pz8" podStartSLOduration=2.571630873 podStartE2EDuration="5.46801154s" podCreationTimestamp="2026-02-18 19:26:49 +0000 UTC" firstStartedPulling="2026-02-18 19:26:50.374042505 +0000 UTC m=+570.078975180" lastFinishedPulling="2026-02-18 19:26:53.270423182 +0000 UTC m=+572.975355847" observedRunningTime="2026-02-18 19:26:54.466652283 +0000 UTC m=+574.171584988" watchObservedRunningTime="2026-02-18 19:26:54.46801154 +0000 UTC m=+574.172944205" Feb 18 19:26:55 crc kubenswrapper[4942]: I0218 19:26:55.415878 4942 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs" event={"ID":"6e365537-e12c-486a-a7e3-156ecf269ba3","Type":"ContainerStarted","Data":"4ffea2f6aed24bc16fc6f5716c235c2e8ed0c9e6d6c462e3a765205da3e13cb1"} Feb 18 19:26:55 crc kubenswrapper[4942]: I0218 19:26:55.416889 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs" Feb 18 19:26:55 crc kubenswrapper[4942]: I0218 19:26:55.442254 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs" podStartSLOduration=2.735627103 podStartE2EDuration="6.442222158s" podCreationTimestamp="2026-02-18 19:26:49 +0000 UTC" firstStartedPulling="2026-02-18 19:26:50.613804541 +0000 UTC m=+570.318737226" lastFinishedPulling="2026-02-18 19:26:54.320399586 +0000 UTC m=+574.025332281" observedRunningTime="2026-02-18 19:26:55.440965214 +0000 UTC m=+575.145897909" watchObservedRunningTime="2026-02-18 19:26:55.442222158 +0000 UTC m=+575.147154863" Feb 18 19:26:59 crc kubenswrapper[4942]: I0218 19:26:59.889409 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-89fzv"] Feb 18 19:26:59 crc kubenswrapper[4942]: I0218 19:26:59.890199 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovn-controller" containerID="cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6" gracePeriod=30 Feb 18 19:26:59 crc kubenswrapper[4942]: I0218 19:26:59.890334 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kube-rbac-proxy-node" containerID="cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7" gracePeriod=30 Feb 18 19:26:59 crc 
kubenswrapper[4942]: I0218 19:26:59.890308 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="nbdb" containerID="cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9" gracePeriod=30 Feb 18 19:26:59 crc kubenswrapper[4942]: I0218 19:26:59.890387 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovn-acl-logging" containerID="cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7" gracePeriod=30 Feb 18 19:26:59 crc kubenswrapper[4942]: I0218 19:26:59.890343 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c" gracePeriod=30 Feb 18 19:26:59 crc kubenswrapper[4942]: I0218 19:26:59.890571 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="northd" containerID="cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94" gracePeriod=30 Feb 18 19:26:59 crc kubenswrapper[4942]: I0218 19:26:59.890727 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="sbdb" containerID="cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23" gracePeriod=30 Feb 18 19:26:59 crc kubenswrapper[4942]: I0218 19:26:59.920122 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" 
podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller" containerID="cri-o://7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6" gracePeriod=30 Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.145471 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.180507 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/3.log" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.182780 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovn-acl-logging/0.log" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.183239 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovn-controller/0.log" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.183867 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.251819 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gjbdb"] Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252110 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252130 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252146 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kubecfg-setup" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252157 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kubecfg-setup" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252166 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kube-rbac-proxy-node" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252174 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kube-rbac-proxy-node" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252189 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovn-acl-logging" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252196 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovn-acl-logging" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252209 4942 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovn-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252217 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovn-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252229 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="nbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252237 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="nbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252247 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252255 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252263 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="sbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252270 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="sbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252283 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="northd" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252290 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="northd" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252300 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" 
containerName="ovnkube-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252307 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252317 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252324 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252335 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252343 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252447 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252458 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="nbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252467 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252478 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252491 4942 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovn-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252502 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovn-acl-logging" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252516 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="sbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252524 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252532 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="northd" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252543 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252552 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kube-rbac-proxy-node" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252661 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252670 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252789 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.254638 4942 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300093 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-etc-openvswitch\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300145 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-netd\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300172 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-bin\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300201 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-env-overrides\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300221 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-log-socket\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300220 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300248 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-var-lib-openvswitch\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300282 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-script-lib\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300235 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300287 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-log-socket" (OuterVolumeSpecName: "log-socket") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300330 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-slash\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300366 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl7tj\" (UniqueName: \"kubernetes.io/projected/45dc4164-81a9-44cf-b86a-dff571bc0417-kube-api-access-cl7tj\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300396 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-ovn\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300424 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-var-lib-cni-networks-ovn-kubernetes\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300452 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-systemd-units\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300365 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300483 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-kubelet\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300506 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-config\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300529 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-systemd\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300566 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/45dc4164-81a9-44cf-b86a-dff571bc0417-ovn-node-metrics-cert\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300589 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-node-log\") pod 
\"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300617 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-netns\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300637 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-ovn-kubernetes\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300664 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-openvswitch\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300377 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300898 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-node-log" (OuterVolumeSpecName: "node-log") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300425 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-slash" (OuterVolumeSpecName: "host-slash") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300452 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300529 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300547 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300656 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300868 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300878 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300884 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300957 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301012 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301180 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301214 4942 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-node-log\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301238 4942 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301251 4942 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301263 4942 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301274 4942 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301284 4942 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-log-socket\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301295 4942 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301309 4942 
reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-slash\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301322 4942 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301333 4942 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301345 4942 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.307353 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45dc4164-81a9-44cf-b86a-dff571bc0417-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.308174 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45dc4164-81a9-44cf-b86a-dff571bc0417-kube-api-access-cl7tj" (OuterVolumeSpecName: "kube-api-access-cl7tj") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "kube-api-access-cl7tj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.317282 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.402544 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-systemd-units\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.402617 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-env-overrides\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.402654 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-node-log\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.402703 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-cni-bin\") pod \"ovnkube-node-gjbdb\" (UID: 
\"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.402896 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.402964 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-ovn\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.402999 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-systemd\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403097 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-var-lib-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403232 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403281 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-etc-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403323 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9574e413-faa5-4a62-a9ef-aaee68989944-ovn-node-metrics-cert\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403436 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-ovnkube-config\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403482 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-run-ovn-kubernetes\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403572 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-kubelet\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403609 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-slash\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403645 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-run-netns\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403689 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxwn7\" (UniqueName: \"kubernetes.io/projected/9574e413-faa5-4a62-a9ef-aaee68989944-kube-api-access-sxwn7\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403747 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-log-socket\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403878 4942 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-ovnkube-script-lib\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403919 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-cni-netd\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404052 4942 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404086 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl7tj\" (UniqueName: \"kubernetes.io/projected/45dc4164-81a9-44cf-b86a-dff571bc0417-kube-api-access-cl7tj\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404107 4942 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404124 4942 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404143 4942 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404161 4942 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/45dc4164-81a9-44cf-b86a-dff571bc0417-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404179 4942 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404198 4942 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404216 4942 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.454420 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/3.log" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.457953 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovn-acl-logging/0.log" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.458875 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovn-controller/0.log" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459666 4942 generic.go:334] 
"Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6" exitCode=0 Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459720 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23" exitCode=0 Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459740 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9" exitCode=0 Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459755 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94" exitCode=0 Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459790 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c" exitCode=0 Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459807 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7" exitCode=0 Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459824 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7" exitCode=143 Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459840 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6" exitCode=143 Feb 18 19:27:00 crc 
kubenswrapper[4942]: I0218 19:27:00.459807 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459828 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460003 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460036 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460061 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460083 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460104 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" 
event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460123 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460142 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460041 4942 scope.go:117] "RemoveContainer" containerID="7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460156 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460257 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460269 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460281 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460331 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460345 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460359 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460377 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460438 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460454 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460466 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460517 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460531 4942 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460544 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460555 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460606 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460625 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460640 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460709 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460741 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"} Feb 18 
19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460756 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460822 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460835 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460847 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460858 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460909 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460922 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460934 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"} Feb 18 
19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460945 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461002 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"9d4b5c04c361e209886b1bb004385933e7d66c1477df3ba1ff39b92720286780"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461023 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461037 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461049 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461100 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461112 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461123 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461135 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461184 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461201 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461212 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.464705 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/2.log" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.465411 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/1.log" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.465492 4942 generic.go:334] "Generic (PLEG): container finished" podID="75150b8c-7a02-497b-86c3-eabc9c8dbc55" containerID="62118c834582250ad430997ee392aa040ba0e100f92c0bb922d559c42cf4e958" exitCode=2 Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.465545 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jfwb" 
event={"ID":"75150b8c-7a02-497b-86c3-eabc9c8dbc55","Type":"ContainerDied","Data":"62118c834582250ad430997ee392aa040ba0e100f92c0bb922d559c42cf4e958"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.465612 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46"} Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.466359 4942 scope.go:117] "RemoveContainer" containerID="62118c834582250ad430997ee392aa040ba0e100f92c0bb922d559c42cf4e958" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.466664 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-8jfwb_openshift-multus(75150b8c-7a02-497b-86c3-eabc9c8dbc55)\"" pod="openshift-multus/multus-8jfwb" podUID="75150b8c-7a02-497b-86c3-eabc9c8dbc55" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.500177 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505408 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-ovnkube-script-lib\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505446 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-cni-netd\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 
19:27:00.505480 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-systemd-units\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505503 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-env-overrides\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505523 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-node-log\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505550 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-cni-bin\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505584 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505608 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-ovn\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505630 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-systemd\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505629 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-systemd-units\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505653 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-var-lib-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505666 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-cni-netd\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.506912 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505696 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-cni-bin\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505727 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-node-log\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505745 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.506954 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-etc-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505798 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-ovn\") pod \"ovnkube-node-gjbdb\" 
(UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505723 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-systemd\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.506983 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9574e413-faa5-4a62-a9ef-aaee68989944-ovn-node-metrics-cert\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507021 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-ovnkube-config\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507025 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-etc-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507038 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-run-ovn-kubernetes\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507036 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507115 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-run-ovn-kubernetes\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507162 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-kubelet\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507183 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-slash\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507206 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-run-netns\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507231 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxwn7\" (UniqueName: \"kubernetes.io/projected/9574e413-faa5-4a62-a9ef-aaee68989944-kube-api-access-sxwn7\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507255 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-log-socket\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507314 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-log-socket\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507344 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-kubelet\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507371 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-slash\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507449 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-run-netns\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505918 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-var-lib-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507710 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-ovnkube-script-lib\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507894 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-ovnkube-config\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.509147 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-env-overrides\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.513956 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/9574e413-faa5-4a62-a9ef-aaee68989944-ovn-node-metrics-cert\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.528615 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-89fzv"] Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.531450 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxwn7\" (UniqueName: \"kubernetes.io/projected/9574e413-faa5-4a62-a9ef-aaee68989944-kube-api-access-sxwn7\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.533017 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-89fzv"] Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.536032 4942 scope.go:117] "RemoveContainer" containerID="c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.550209 4942 scope.go:117] "RemoveContainer" containerID="bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.569480 4942 scope.go:117] "RemoveContainer" containerID="b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.587842 4942 scope.go:117] "RemoveContainer" containerID="9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.592171 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.619047 4942 scope.go:117] "RemoveContainer" containerID="e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.635787 4942 scope.go:117] "RemoveContainer" containerID="6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.655291 4942 scope.go:117] "RemoveContainer" containerID="427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.675706 4942 scope.go:117] "RemoveContainer" containerID="581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.690256 4942 scope.go:117] "RemoveContainer" containerID="7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.691368 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": container with ID starting with 7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6 not found: ID does not exist" containerID="7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.691405 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"} err="failed to get container status \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": rpc error: code = NotFound desc = could not find container \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": container with ID starting with 7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6 
not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.691432 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.691749 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": container with ID starting with 331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9 not found: ID does not exist" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.691804 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"} err="failed to get container status \"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": rpc error: code = NotFound desc = could not find container \"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": container with ID starting with 331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.691829 4942 scope.go:117] "RemoveContainer" containerID="c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.692274 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": container with ID starting with c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23 not found: ID does not exist" containerID="c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.692313 4942 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"} err="failed to get container status \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": rpc error: code = NotFound desc = could not find container \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": container with ID starting with c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.692347 4942 scope.go:117] "RemoveContainer" containerID="bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.692884 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": container with ID starting with bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9 not found: ID does not exist" containerID="bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.692905 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"} err="failed to get container status \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": rpc error: code = NotFound desc = could not find container \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": container with ID starting with bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.692919 4942 scope.go:117] "RemoveContainer" containerID="b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 
19:27:00.693362 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": container with ID starting with b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94 not found: ID does not exist" containerID="b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.693416 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"} err="failed to get container status \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": rpc error: code = NotFound desc = could not find container \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": container with ID starting with b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.693448 4942 scope.go:117] "RemoveContainer" containerID="9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.693928 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": container with ID starting with 9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c not found: ID does not exist" containerID="9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.693971 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"} err="failed to get container status \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": rpc 
error: code = NotFound desc = could not find container \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": container with ID starting with 9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.693993 4942 scope.go:117] "RemoveContainer" containerID="e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.694343 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": container with ID starting with e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7 not found: ID does not exist" containerID="e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.694376 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"} err="failed to get container status \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": rpc error: code = NotFound desc = could not find container \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": container with ID starting with e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.694396 4942 scope.go:117] "RemoveContainer" containerID="6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.694870 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": container with ID starting with 
6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7 not found: ID does not exist" containerID="6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.694901 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"} err="failed to get container status \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": rpc error: code = NotFound desc = could not find container \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": container with ID starting with 6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.694947 4942 scope.go:117] "RemoveContainer" containerID="427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.695258 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": container with ID starting with 427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6 not found: ID does not exist" containerID="427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.695295 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"} err="failed to get container status \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": rpc error: code = NotFound desc = could not find container \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": container with ID starting with 427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6 not found: ID does not 
exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.695319 4942 scope.go:117] "RemoveContainer" containerID="581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.695684 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": container with ID starting with 581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc not found: ID does not exist" containerID="581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.695713 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"} err="failed to get container status \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": rpc error: code = NotFound desc = could not find container \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": container with ID starting with 581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.695730 4942 scope.go:117] "RemoveContainer" containerID="7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.696100 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"} err="failed to get container status \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": rpc error: code = NotFound desc = could not find container \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": container with ID starting with 7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6 not found: ID 
does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.696157 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.696500 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"} err="failed to get container status \"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": rpc error: code = NotFound desc = could not find container \"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": container with ID starting with 331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.696529 4942 scope.go:117] "RemoveContainer" containerID="c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.696870 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"} err="failed to get container status \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": rpc error: code = NotFound desc = could not find container \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": container with ID starting with c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.696902 4942 scope.go:117] "RemoveContainer" containerID="bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.697240 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"} err="failed to get container 
status \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": rpc error: code = NotFound desc = could not find container \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": container with ID starting with bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.697287 4942 scope.go:117] "RemoveContainer" containerID="b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.697661 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"} err="failed to get container status \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": rpc error: code = NotFound desc = could not find container \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": container with ID starting with b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.697685 4942 scope.go:117] "RemoveContainer" containerID="9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.698103 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"} err="failed to get container status \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": rpc error: code = NotFound desc = could not find container \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": container with ID starting with 9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.698131 4942 scope.go:117] "RemoveContainer" 
containerID="e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.698380 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"} err="failed to get container status \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": rpc error: code = NotFound desc = could not find container \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": container with ID starting with e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.698407 4942 scope.go:117] "RemoveContainer" containerID="6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.698885 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"} err="failed to get container status \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": rpc error: code = NotFound desc = could not find container \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": container with ID starting with 6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.698918 4942 scope.go:117] "RemoveContainer" containerID="427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.699203 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"} err="failed to get container status \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": rpc error: code = NotFound desc = could 
not find container \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": container with ID starting with 427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.699223 4942 scope.go:117] "RemoveContainer" containerID="581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.699447 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"} err="failed to get container status \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": rpc error: code = NotFound desc = could not find container \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": container with ID starting with 581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.699471 4942 scope.go:117] "RemoveContainer" containerID="7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.699888 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"} err="failed to get container status \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": rpc error: code = NotFound desc = could not find container \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": container with ID starting with 7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.699915 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 
19:27:00.700208 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"} err="failed to get container status \"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": rpc error: code = NotFound desc = could not find container \"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": container with ID starting with 331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.700231 4942 scope.go:117] "RemoveContainer" containerID="c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.700515 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"} err="failed to get container status \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": rpc error: code = NotFound desc = could not find container \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": container with ID starting with c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.700543 4942 scope.go:117] "RemoveContainer" containerID="bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.700872 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"} err="failed to get container status \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": rpc error: code = NotFound desc = could not find container \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": container with ID starting with 
bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.700900 4942 scope.go:117] "RemoveContainer" containerID="b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.701152 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"} err="failed to get container status \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": rpc error: code = NotFound desc = could not find container \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": container with ID starting with b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.701178 4942 scope.go:117] "RemoveContainer" containerID="9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.701563 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"} err="failed to get container status \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": rpc error: code = NotFound desc = could not find container \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": container with ID starting with 9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.701584 4942 scope.go:117] "RemoveContainer" containerID="e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.701873 4942 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"} err="failed to get container status \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": rpc error: code = NotFound desc = could not find container \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": container with ID starting with e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.701900 4942 scope.go:117] "RemoveContainer" containerID="6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.702177 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"} err="failed to get container status \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": rpc error: code = NotFound desc = could not find container \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": container with ID starting with 6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.702239 4942 scope.go:117] "RemoveContainer" containerID="427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.702517 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"} err="failed to get container status \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": rpc error: code = NotFound desc = could not find container \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": container with ID starting with 427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6 not found: ID does not 
exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.702538 4942 scope.go:117] "RemoveContainer" containerID="581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.702820 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"} err="failed to get container status \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": rpc error: code = NotFound desc = could not find container \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": container with ID starting with 581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.702847 4942 scope.go:117] "RemoveContainer" containerID="7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.703138 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"} err="failed to get container status \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": rpc error: code = NotFound desc = could not find container \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": container with ID starting with 7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.703161 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.703409 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"} err="failed to get container status 
\"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": rpc error: code = NotFound desc = could not find container \"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": container with ID starting with 331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.703431 4942 scope.go:117] "RemoveContainer" containerID="c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.703659 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"} err="failed to get container status \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": rpc error: code = NotFound desc = could not find container \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": container with ID starting with c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.703686 4942 scope.go:117] "RemoveContainer" containerID="bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.703999 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"} err="failed to get container status \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": rpc error: code = NotFound desc = could not find container \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": container with ID starting with bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.704022 4942 scope.go:117] "RemoveContainer" 
containerID="b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.704256 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"} err="failed to get container status \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": rpc error: code = NotFound desc = could not find container \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": container with ID starting with b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.704298 4942 scope.go:117] "RemoveContainer" containerID="9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.704720 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"} err="failed to get container status \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": rpc error: code = NotFound desc = could not find container \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": container with ID starting with 9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.704743 4942 scope.go:117] "RemoveContainer" containerID="e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.705608 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"} err="failed to get container status \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": rpc error: code = NotFound desc = could 
not find container \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": container with ID starting with e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.705644 4942 scope.go:117] "RemoveContainer" containerID="6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.706010 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"} err="failed to get container status \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": rpc error: code = NotFound desc = could not find container \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": container with ID starting with 6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.706035 4942 scope.go:117] "RemoveContainer" containerID="427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.706338 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"} err="failed to get container status \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": rpc error: code = NotFound desc = could not find container \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": container with ID starting with 427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.706374 4942 scope.go:117] "RemoveContainer" containerID="581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 
19:27:00.706811 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"} err="failed to get container status \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": rpc error: code = NotFound desc = could not find container \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": container with ID starting with 581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.706842 4942 scope.go:117] "RemoveContainer" containerID="7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.707094 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"} err="failed to get container status \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": rpc error: code = NotFound desc = could not find container \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": container with ID starting with 7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6 not found: ID does not exist" Feb 18 19:27:01 crc kubenswrapper[4942]: I0218 19:27:01.048500 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" path="/var/lib/kubelet/pods/45dc4164-81a9-44cf-b86a-dff571bc0417/volumes" Feb 18 19:27:01 crc kubenswrapper[4942]: I0218 19:27:01.473424 4942 generic.go:334] "Generic (PLEG): container finished" podID="9574e413-faa5-4a62-a9ef-aaee68989944" containerID="b3806cadf6db010b7ff938701ef6c223075e700d63136fe60f4aa5b6ab710c25" exitCode=0 Feb 18 19:27:01 crc kubenswrapper[4942]: I0218 19:27:01.473463 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" 
event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerDied","Data":"b3806cadf6db010b7ff938701ef6c223075e700d63136fe60f4aa5b6ab710c25"} Feb 18 19:27:01 crc kubenswrapper[4942]: I0218 19:27:01.473514 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"4e65f2f25c26de3fdb063c8a7c04ce58c5c1e39df7b646bca82561106b59cff4"} Feb 18 19:27:02 crc kubenswrapper[4942]: I0218 19:27:02.482250 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"8702d73c36e3d25dc2ecc4611e8459e92dbc20e65cf96f81005b5d0fbd0fa3b2"} Feb 18 19:27:02 crc kubenswrapper[4942]: I0218 19:27:02.482539 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"1fe567c7f5871b8847d931208b1f6d6d85a4716ecde1f739b66cc2fea61ff2a0"} Feb 18 19:27:02 crc kubenswrapper[4942]: I0218 19:27:02.482559 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"eeac1ee4643777c1aa501950bed3477bb3d55b1f9e7699e7c8398406c4034434"} Feb 18 19:27:02 crc kubenswrapper[4942]: I0218 19:27:02.482576 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"d60548efb514441807d9eca0f97e09724ec058e39e5591b232d5fccfedf16463"} Feb 18 19:27:02 crc kubenswrapper[4942]: I0218 19:27:02.482594 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" 
event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"b5842ac58811154562b4d429af15ff6d1931e52c1eb3efbd6b7bade3e787badd"} Feb 18 19:27:02 crc kubenswrapper[4942]: I0218 19:27:02.482611 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"6e93c6773d7467ab5007080e262b72bbf4b3d35c0af27f60e0c9a8b9e5aff647"} Feb 18 19:27:05 crc kubenswrapper[4942]: I0218 19:27:05.579368 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"72e5fef7206f23ebac783522ff692ffa396621e19f73e573f5a691473bc941ec"} Feb 18 19:27:07 crc kubenswrapper[4942]: I0218 19:27:07.595279 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"3e16529e2f53a255fd6b48abe3f547c6b939e7ab702b745ed2264244bd5959e5"} Feb 18 19:27:07 crc kubenswrapper[4942]: I0218 19:27:07.595661 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:07 crc kubenswrapper[4942]: I0218 19:27:07.597271 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:07 crc kubenswrapper[4942]: I0218 19:27:07.597305 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:07 crc kubenswrapper[4942]: I0218 19:27:07.623814 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:07 crc kubenswrapper[4942]: I0218 19:27:07.627260 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:07 crc kubenswrapper[4942]: I0218 19:27:07.629311 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" podStartSLOduration=7.629297102 podStartE2EDuration="7.629297102s" podCreationTimestamp="2026-02-18 19:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:27:07.6246983 +0000 UTC m=+587.329630985" watchObservedRunningTime="2026-02-18 19:27:07.629297102 +0000 UTC m=+587.334229787" Feb 18 19:27:15 crc kubenswrapper[4942]: I0218 19:27:15.036422 4942 scope.go:117] "RemoveContainer" containerID="62118c834582250ad430997ee392aa040ba0e100f92c0bb922d559c42cf4e958" Feb 18 19:27:15 crc kubenswrapper[4942]: E0218 19:27:15.037345 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-8jfwb_openshift-multus(75150b8c-7a02-497b-86c3-eabc9c8dbc55)\"" pod="openshift-multus/multus-8jfwb" podUID="75150b8c-7a02-497b-86c3-eabc9c8dbc55" Feb 18 19:27:21 crc kubenswrapper[4942]: I0218 19:27:21.296585 4942 scope.go:117] "RemoveContainer" containerID="4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46" Feb 18 19:27:21 crc kubenswrapper[4942]: I0218 19:27:21.691734 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/2.log" Feb 18 19:27:23 crc kubenswrapper[4942]: I0218 19:27:23.741045 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:27:23 crc kubenswrapper[4942]: 
I0218 19:27:23.741147 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.015718 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"] Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.017995 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.021729 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.033234 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"] Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.036607 4942 scope.go:117] "RemoveContainer" containerID="62118c834582250ad430997ee392aa040ba0e100f92c0bb922d559c42cf4e958" Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.090077 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.090224 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.090305 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdmzp\" (UniqueName: \"kubernetes.io/projected/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-kube-api-access-xdmzp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.192336 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.192494 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.192600 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdmzp\" (UniqueName: \"kubernetes.io/projected/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-kube-api-access-xdmzp\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.193477 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.193505 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.225009 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdmzp\" (UniqueName: \"kubernetes.io/projected/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-kube-api-access-xdmzp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.384573 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:27 crc kubenswrapper[4942]: E0218 19:27:27.420419 4942 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace_a2cedb85-fdc1-4d04-b9e2-967d0d2791da_0(7058b14b21ef01edd614ab28a7b919b5565fa10bd1f21fad4ac4b49b3e621e6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 19:27:27 crc kubenswrapper[4942]: E0218 19:27:27.420508 4942 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace_a2cedb85-fdc1-4d04-b9e2-967d0d2791da_0(7058b14b21ef01edd614ab28a7b919b5565fa10bd1f21fad4ac4b49b3e621e6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:27 crc kubenswrapper[4942]: E0218 19:27:27.420551 4942 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace_a2cedb85-fdc1-4d04-b9e2-967d0d2791da_0(7058b14b21ef01edd614ab28a7b919b5565fa10bd1f21fad4ac4b49b3e621e6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:27 crc kubenswrapper[4942]: E0218 19:27:27.420627 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace(a2cedb85-fdc1-4d04-b9e2-967d0d2791da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace(a2cedb85-fdc1-4d04-b9e2-967d0d2791da)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace_a2cedb85-fdc1-4d04-b9e2-967d0d2791da_0(7058b14b21ef01edd614ab28a7b919b5565fa10bd1f21fad4ac4b49b3e621e6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.734228 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/2.log" Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.734370 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.734368 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jfwb" event={"ID":"75150b8c-7a02-497b-86c3-eabc9c8dbc55","Type":"ContainerStarted","Data":"68a6bd8e884ce1a855d0edd9eff0fbea8148383a78fc6b30daf35f06965eadbc"} Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.736554 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:27 crc kubenswrapper[4942]: E0218 19:27:27.777613 4942 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace_a2cedb85-fdc1-4d04-b9e2-967d0d2791da_0(0e774a7a9d0bf2beee082ec0668f0f5a0f60588e4e59cb2ea604cd14da9a7429): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 19:27:27 crc kubenswrapper[4942]: E0218 19:27:27.777724 4942 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace_a2cedb85-fdc1-4d04-b9e2-967d0d2791da_0(0e774a7a9d0bf2beee082ec0668f0f5a0f60588e4e59cb2ea604cd14da9a7429): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:27 crc kubenswrapper[4942]: E0218 19:27:27.777804 4942 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace_a2cedb85-fdc1-4d04-b9e2-967d0d2791da_0(0e774a7a9d0bf2beee082ec0668f0f5a0f60588e4e59cb2ea604cd14da9a7429): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:27 crc kubenswrapper[4942]: E0218 19:27:27.777927 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace(a2cedb85-fdc1-4d04-b9e2-967d0d2791da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace(a2cedb85-fdc1-4d04-b9e2-967d0d2791da)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace_a2cedb85-fdc1-4d04-b9e2-967d0d2791da_0(0e774a7a9d0bf2beee082ec0668f0f5a0f60588e4e59cb2ea604cd14da9a7429): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" Feb 18 19:27:30 crc kubenswrapper[4942]: I0218 19:27:30.630826 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:41 crc kubenswrapper[4942]: I0218 19:27:41.035807 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:41 crc kubenswrapper[4942]: I0218 19:27:41.041526 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:41 crc kubenswrapper[4942]: I0218 19:27:41.293351 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"] Feb 18 19:27:41 crc kubenswrapper[4942]: I0218 19:27:41.836550 4942 generic.go:334] "Generic (PLEG): container finished" podID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerID="f9036c547ff46d448e446680de7f71fc6a8d1a01d85f1b6d0cedaf3c3785e510" exitCode=0 Feb 18 19:27:41 crc kubenswrapper[4942]: I0218 19:27:41.836645 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" event={"ID":"a2cedb85-fdc1-4d04-b9e2-967d0d2791da","Type":"ContainerDied","Data":"f9036c547ff46d448e446680de7f71fc6a8d1a01d85f1b6d0cedaf3c3785e510"} Feb 18 19:27:41 crc kubenswrapper[4942]: I0218 19:27:41.836910 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" event={"ID":"a2cedb85-fdc1-4d04-b9e2-967d0d2791da","Type":"ContainerStarted","Data":"ef9f46e2a0b144ea8b66465734389623694b2295e808caaab96975617bc221ba"} Feb 18 19:27:43 crc kubenswrapper[4942]: I0218 19:27:43.857204 4942 generic.go:334] "Generic (PLEG): container finished" podID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerID="3bb342f48838670913250684aad6b73f7799ab3f4a96c8f68276fea888ab361d" exitCode=0 Feb 18 19:27:43 crc kubenswrapper[4942]: I0218 19:27:43.857298 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" event={"ID":"a2cedb85-fdc1-4d04-b9e2-967d0d2791da","Type":"ContainerDied","Data":"3bb342f48838670913250684aad6b73f7799ab3f4a96c8f68276fea888ab361d"} Feb 18 19:27:44 crc kubenswrapper[4942]: I0218 19:27:44.865787 4942 
generic.go:334] "Generic (PLEG): container finished" podID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerID="097d3e24b692e61af5414e5fb41749063e9281c93871177c569f43a4f903f6fd" exitCode=0 Feb 18 19:27:44 crc kubenswrapper[4942]: I0218 19:27:44.865804 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" event={"ID":"a2cedb85-fdc1-4d04-b9e2-967d0d2791da","Type":"ContainerDied","Data":"097d3e24b692e61af5414e5fb41749063e9281c93871177c569f43a4f903f6fd"} Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.190499 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.346410 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-util\") pod \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.347121 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-bundle\") pod \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.347171 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdmzp\" (UniqueName: \"kubernetes.io/projected/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-kube-api-access-xdmzp\") pod \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.351116 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-bundle" (OuterVolumeSpecName: "bundle") pod "a2cedb85-fdc1-4d04-b9e2-967d0d2791da" (UID: "a2cedb85-fdc1-4d04-b9e2-967d0d2791da"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.354128 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-kube-api-access-xdmzp" (OuterVolumeSpecName: "kube-api-access-xdmzp") pod "a2cedb85-fdc1-4d04-b9e2-967d0d2791da" (UID: "a2cedb85-fdc1-4d04-b9e2-967d0d2791da"). InnerVolumeSpecName "kube-api-access-xdmzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.365396 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-util" (OuterVolumeSpecName: "util") pod "a2cedb85-fdc1-4d04-b9e2-967d0d2791da" (UID: "a2cedb85-fdc1-4d04-b9e2-967d0d2791da"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.449015 4942 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.449057 4942 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.449069 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdmzp\" (UniqueName: \"kubernetes.io/projected/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-kube-api-access-xdmzp\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.899401 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" event={"ID":"a2cedb85-fdc1-4d04-b9e2-967d0d2791da","Type":"ContainerDied","Data":"ef9f46e2a0b144ea8b66465734389623694b2295e808caaab96975617bc221ba"} Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.899445 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef9f46e2a0b144ea8b66465734389623694b2295e808caaab96975617bc221ba" Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.899479 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" Feb 18 19:27:53 crc kubenswrapper[4942]: I0218 19:27:53.741341 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:27:53 crc kubenswrapper[4942]: I0218 19:27:53.741915 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:27:53 crc kubenswrapper[4942]: I0218 19:27:53.741964 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:27:53 crc kubenswrapper[4942]: I0218 19:27:53.742585 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69563ccc2ca715071d77cf8ee678820b7e15eada4a6e511a3ef021c2758d0101"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:27:53 crc kubenswrapper[4942]: I0218 19:27:53.742655 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://69563ccc2ca715071d77cf8ee678820b7e15eada4a6e511a3ef021c2758d0101" gracePeriod=600 Feb 18 19:27:53 crc kubenswrapper[4942]: I0218 19:27:53.941366 4942 generic.go:334] "Generic (PLEG): 
container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="69563ccc2ca715071d77cf8ee678820b7e15eada4a6e511a3ef021c2758d0101" exitCode=0 Feb 18 19:27:53 crc kubenswrapper[4942]: I0218 19:27:53.941442 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"69563ccc2ca715071d77cf8ee678820b7e15eada4a6e511a3ef021c2758d0101"} Feb 18 19:27:53 crc kubenswrapper[4942]: I0218 19:27:53.941680 4942 scope.go:117] "RemoveContainer" containerID="cbd8c39f4ca27a862760680c197d71be21444460d43b83855f644da4c249ce06" Feb 18 19:27:54 crc kubenswrapper[4942]: I0218 19:27:54.948631 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"573640abad6b15c1dd30fd80a1b600755a1efda149dab25e49e3a1173acf646a"} Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.023458 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8"] Feb 18 19:27:55 crc kubenswrapper[4942]: E0218 19:27:55.023696 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerName="extract" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.023709 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerName="extract" Feb 18 19:27:55 crc kubenswrapper[4942]: E0218 19:27:55.023728 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerName="util" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.023735 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerName="util" Feb 18 19:27:55 crc kubenswrapper[4942]: E0218 
19:27:55.023748 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerName="pull" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.023756 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerName="pull" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.023878 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerName="extract" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.024272 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.025699 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.025964 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-4lq4s" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.028870 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.076259 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8"] Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.080663 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk"] Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.082977 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.085744 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-flnfb" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.086018 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.090959 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw"] Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.093957 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.097218 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk"] Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.124613 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw"] Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.153429 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96x2f\" (UniqueName: \"kubernetes.io/projected/c22c1602-eed9-45f3-93cf-80a86cad1bab-kube-api-access-96x2f\") pod \"obo-prometheus-operator-68bc856cb9-nh9c8\" (UID: \"c22c1602-eed9-45f3-93cf-80a86cad1bab\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.242811 4942 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operators/observability-operator-59bdc8b94-c4t79"] Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.243966 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.246468 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-7nxw2" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.246720 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.254922 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cf13df3-44a2-4895-ac06-37d5eba7767d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-5wrzk\" (UID: \"4cf13df3-44a2-4895-ac06-37d5eba7767d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.254975 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61120cf0-34ff-4dbe-9a7a-c94fe6960d34-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-qvrqw\" (UID: \"61120cf0-34ff-4dbe-9a7a-c94fe6960d34\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.255037 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61120cf0-34ff-4dbe-9a7a-c94fe6960d34-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-qvrqw\" (UID: \"61120cf0-34ff-4dbe-9a7a-c94fe6960d34\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.255072 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96x2f\" (UniqueName: \"kubernetes.io/projected/c22c1602-eed9-45f3-93cf-80a86cad1bab-kube-api-access-96x2f\") pod \"obo-prometheus-operator-68bc856cb9-nh9c8\" (UID: \"c22c1602-eed9-45f3-93cf-80a86cad1bab\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.255164 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cf13df3-44a2-4895-ac06-37d5eba7767d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-5wrzk\" (UID: \"4cf13df3-44a2-4895-ac06-37d5eba7767d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.258399 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-c4t79"] Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.278055 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96x2f\" (UniqueName: \"kubernetes.io/projected/c22c1602-eed9-45f3-93cf-80a86cad1bab-kube-api-access-96x2f\") pod \"obo-prometheus-operator-68bc856cb9-nh9c8\" (UID: \"c22c1602-eed9-45f3-93cf-80a86cad1bab\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.338670 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.357005 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d59t2\" (UniqueName: \"kubernetes.io/projected/c013e97b-628a-48b9-9758-3b8c388b8be9-kube-api-access-d59t2\") pod \"observability-operator-59bdc8b94-c4t79\" (UID: \"c013e97b-628a-48b9-9758-3b8c388b8be9\") " pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.357073 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c013e97b-628a-48b9-9758-3b8c388b8be9-observability-operator-tls\") pod \"observability-operator-59bdc8b94-c4t79\" (UID: \"c013e97b-628a-48b9-9758-3b8c388b8be9\") " pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.357120 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cf13df3-44a2-4895-ac06-37d5eba7767d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-5wrzk\" (UID: \"4cf13df3-44a2-4895-ac06-37d5eba7767d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.357150 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cf13df3-44a2-4895-ac06-37d5eba7767d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-5wrzk\" (UID: \"4cf13df3-44a2-4895-ac06-37d5eba7767d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.357204 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61120cf0-34ff-4dbe-9a7a-c94fe6960d34-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-qvrqw\" (UID: \"61120cf0-34ff-4dbe-9a7a-c94fe6960d34\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.357239 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61120cf0-34ff-4dbe-9a7a-c94fe6960d34-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-qvrqw\" (UID: \"61120cf0-34ff-4dbe-9a7a-c94fe6960d34\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.361743 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61120cf0-34ff-4dbe-9a7a-c94fe6960d34-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-qvrqw\" (UID: \"61120cf0-34ff-4dbe-9a7a-c94fe6960d34\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.361895 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61120cf0-34ff-4dbe-9a7a-c94fe6960d34-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-qvrqw\" (UID: \"61120cf0-34ff-4dbe-9a7a-c94fe6960d34\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.363139 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cf13df3-44a2-4895-ac06-37d5eba7767d-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-5489f95489-5wrzk\" (UID: \"4cf13df3-44a2-4895-ac06-37d5eba7767d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.367274 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cf13df3-44a2-4895-ac06-37d5eba7767d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-5wrzk\" (UID: \"4cf13df3-44a2-4895-ac06-37d5eba7767d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.404433 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.414484 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.462433 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d59t2\" (UniqueName: \"kubernetes.io/projected/c013e97b-628a-48b9-9758-3b8c388b8be9-kube-api-access-d59t2\") pod \"observability-operator-59bdc8b94-c4t79\" (UID: \"c013e97b-628a-48b9-9758-3b8c388b8be9\") " pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.462778 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c013e97b-628a-48b9-9758-3b8c388b8be9-observability-operator-tls\") pod \"observability-operator-59bdc8b94-c4t79\" (UID: \"c013e97b-628a-48b9-9758-3b8c388b8be9\") " pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:27:55 crc 
kubenswrapper[4942]: I0218 19:27:55.467931 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c013e97b-628a-48b9-9758-3b8c388b8be9-observability-operator-tls\") pod \"observability-operator-59bdc8b94-c4t79\" (UID: \"c013e97b-628a-48b9-9758-3b8c388b8be9\") " pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.492675 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-qs7ps"] Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.493512 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.518734 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d59t2\" (UniqueName: \"kubernetes.io/projected/c013e97b-628a-48b9-9758-3b8c388b8be9-kube-api-access-d59t2\") pod \"observability-operator-59bdc8b94-c4t79\" (UID: \"c013e97b-628a-48b9-9758-3b8c388b8be9\") " pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.528079 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-qxhvn" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.542058 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-qs7ps"] Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.560175 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.566319 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94w99\" (UniqueName: \"kubernetes.io/projected/8fc17a06-be09-403e-8923-df71fac9cdfe-kube-api-access-94w99\") pod \"perses-operator-5bf474d74f-qs7ps\" (UID: \"8fc17a06-be09-403e-8923-df71fac9cdfe\") " pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.566362 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8fc17a06-be09-403e-8923-df71fac9cdfe-openshift-service-ca\") pod \"perses-operator-5bf474d74f-qs7ps\" (UID: \"8fc17a06-be09-403e-8923-df71fac9cdfe\") " pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.646616 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8"] Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.668145 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94w99\" (UniqueName: \"kubernetes.io/projected/8fc17a06-be09-403e-8923-df71fac9cdfe-kube-api-access-94w99\") pod \"perses-operator-5bf474d74f-qs7ps\" (UID: \"8fc17a06-be09-403e-8923-df71fac9cdfe\") " pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.668204 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8fc17a06-be09-403e-8923-df71fac9cdfe-openshift-service-ca\") pod \"perses-operator-5bf474d74f-qs7ps\" (UID: \"8fc17a06-be09-403e-8923-df71fac9cdfe\") " pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 
18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.670144 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8fc17a06-be09-403e-8923-df71fac9cdfe-openshift-service-ca\") pod \"perses-operator-5bf474d74f-qs7ps\" (UID: \"8fc17a06-be09-403e-8923-df71fac9cdfe\") " pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:27:55 crc kubenswrapper[4942]: W0218 19:27:55.674948 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc22c1602_eed9_45f3_93cf_80a86cad1bab.slice/crio-cd82067559c38da7d771016dbc8d003ed6c20dd918257b961141dced42d3f006 WatchSource:0}: Error finding container cd82067559c38da7d771016dbc8d003ed6c20dd918257b961141dced42d3f006: Status 404 returned error can't find the container with id cd82067559c38da7d771016dbc8d003ed6c20dd918257b961141dced42d3f006 Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.697211 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94w99\" (UniqueName: \"kubernetes.io/projected/8fc17a06-be09-403e-8923-df71fac9cdfe-kube-api-access-94w99\") pod \"perses-operator-5bf474d74f-qs7ps\" (UID: \"8fc17a06-be09-403e-8923-df71fac9cdfe\") " pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.857956 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.882379 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw"] Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.932057 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-c4t79"] Feb 18 19:27:55 crc kubenswrapper[4942]: W0218 19:27:55.937057 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc013e97b_628a_48b9_9758_3b8c388b8be9.slice/crio-641f7b283402c1b4f1c8f90c8ba02b32cc72cad10611968363b08f3d8a7940b2 WatchSource:0}: Error finding container 641f7b283402c1b4f1c8f90c8ba02b32cc72cad10611968363b08f3d8a7940b2: Status 404 returned error can't find the container with id 641f7b283402c1b4f1c8f90c8ba02b32cc72cad10611968363b08f3d8a7940b2 Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.975155 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-c4t79" event={"ID":"c013e97b-628a-48b9-9758-3b8c388b8be9","Type":"ContainerStarted","Data":"641f7b283402c1b4f1c8f90c8ba02b32cc72cad10611968363b08f3d8a7940b2"} Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.976135 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8" event={"ID":"c22c1602-eed9-45f3-93cf-80a86cad1bab","Type":"ContainerStarted","Data":"cd82067559c38da7d771016dbc8d003ed6c20dd918257b961141dced42d3f006"} Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.977596 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" 
event={"ID":"61120cf0-34ff-4dbe-9a7a-c94fe6960d34","Type":"ContainerStarted","Data":"df25e3cbe418dcc38a3ca7320c976ab8a35f94640782e5e39bc028bb756bc143"} Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.991884 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk"] Feb 18 19:27:56 crc kubenswrapper[4942]: W0218 19:27:56.005002 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cf13df3_44a2_4895_ac06_37d5eba7767d.slice/crio-89a8c8eb91124245f6d7152593de9f43f4a7b39d32375adf529896468714bbf5 WatchSource:0}: Error finding container 89a8c8eb91124245f6d7152593de9f43f4a7b39d32375adf529896468714bbf5: Status 404 returned error can't find the container with id 89a8c8eb91124245f6d7152593de9f43f4a7b39d32375adf529896468714bbf5 Feb 18 19:27:56 crc kubenswrapper[4942]: I0218 19:27:56.113418 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-qs7ps"] Feb 18 19:27:56 crc kubenswrapper[4942]: W0218 19:27:56.122697 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fc17a06_be09_403e_8923_df71fac9cdfe.slice/crio-33c118cc62c87837026207f4a577defbffb7a215c4109c290d071b1684100c56 WatchSource:0}: Error finding container 33c118cc62c87837026207f4a577defbffb7a215c4109c290d071b1684100c56: Status 404 returned error can't find the container with id 33c118cc62c87837026207f4a577defbffb7a215c4109c290d071b1684100c56 Feb 18 19:27:56 crc kubenswrapper[4942]: I0218 19:27:56.983294 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" event={"ID":"8fc17a06-be09-403e-8923-df71fac9cdfe","Type":"ContainerStarted","Data":"33c118cc62c87837026207f4a577defbffb7a215c4109c290d071b1684100c56"} Feb 18 19:27:56 crc kubenswrapper[4942]: I0218 
19:27:56.984590 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" event={"ID":"4cf13df3-44a2-4895-ac06-37d5eba7767d","Type":"ContainerStarted","Data":"89a8c8eb91124245f6d7152593de9f43f4a7b39d32375adf529896468714bbf5"} Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.591693 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" event={"ID":"8fc17a06-be09-403e-8923-df71fac9cdfe","Type":"ContainerStarted","Data":"678888a0246e824597c9ddf9dea76c8fe8b8bdd40947dff23fab75196d418c41"} Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.622595 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.630008 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" event={"ID":"4cf13df3-44a2-4895-ac06-37d5eba7767d","Type":"ContainerStarted","Data":"a5095c5c5df4d4e439e71a7788bdcdb8960f9234d622f27913f2ea4ab15a5077"} Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.639316 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8" event={"ID":"c22c1602-eed9-45f3-93cf-80a86cad1bab","Type":"ContainerStarted","Data":"b919e859fa991b7a897fee4ce0cc9a550c122caae155afaa47a4d1b41bda2b38"} Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.646738 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" event={"ID":"61120cf0-34ff-4dbe-9a7a-c94fe6960d34","Type":"ContainerStarted","Data":"bc312846055406168f6288fece1f051f1daf157e5ce554dc1354211e400748d2"} Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.649593 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/observability-operator-59bdc8b94-c4t79" event={"ID":"c013e97b-628a-48b9-9758-3b8c388b8be9","Type":"ContainerStarted","Data":"ef35457f469716a899b842dea2fbd203aa16a947aaf54b87b58f68c79f293397"} Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.650298 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.654487 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.655737 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" podStartSLOduration=2.992199748 podStartE2EDuration="12.655721028s" podCreationTimestamp="2026-02-18 19:27:55 +0000 UTC" firstStartedPulling="2026-02-18 19:27:56.124030555 +0000 UTC m=+635.828963220" lastFinishedPulling="2026-02-18 19:28:05.787551835 +0000 UTC m=+645.492484500" observedRunningTime="2026-02-18 19:28:07.652698248 +0000 UTC m=+647.357630913" watchObservedRunningTime="2026-02-18 19:28:07.655721028 +0000 UTC m=+647.360653693" Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.679488 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-c4t79" podStartSLOduration=2.796576324 podStartE2EDuration="12.679473289s" podCreationTimestamp="2026-02-18 19:27:55 +0000 UTC" firstStartedPulling="2026-02-18 19:27:55.939070464 +0000 UTC m=+635.644003129" lastFinishedPulling="2026-02-18 19:28:05.821967429 +0000 UTC m=+645.526900094" observedRunningTime="2026-02-18 19:28:07.678333478 +0000 UTC m=+647.383266143" watchObservedRunningTime="2026-02-18 19:28:07.679473289 +0000 UTC m=+647.384405954" Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.706564 4942 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" podStartSLOduration=2.929197195 podStartE2EDuration="12.706530727s" podCreationTimestamp="2026-02-18 19:27:55 +0000 UTC" firstStartedPulling="2026-02-18 19:27:56.012258577 +0000 UTC m=+635.717191232" lastFinishedPulling="2026-02-18 19:28:05.789592099 +0000 UTC m=+645.494524764" observedRunningTime="2026-02-18 19:28:07.705398497 +0000 UTC m=+647.410331162" watchObservedRunningTime="2026-02-18 19:28:07.706530727 +0000 UTC m=+647.411463392" Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.738869 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8" podStartSLOduration=2.6068059249999997 podStartE2EDuration="12.738828655s" podCreationTimestamp="2026-02-18 19:27:55 +0000 UTC" firstStartedPulling="2026-02-18 19:27:55.689958359 +0000 UTC m=+635.394891024" lastFinishedPulling="2026-02-18 19:28:05.821981089 +0000 UTC m=+645.526913754" observedRunningTime="2026-02-18 19:28:07.734837599 +0000 UTC m=+647.439770284" watchObservedRunningTime="2026-02-18 19:28:07.738828655 +0000 UTC m=+647.443761340" Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.757723 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" podStartSLOduration=2.875975533 podStartE2EDuration="12.757702706s" podCreationTimestamp="2026-02-18 19:27:55 +0000 UTC" firstStartedPulling="2026-02-18 19:27:55.904978139 +0000 UTC m=+635.609910804" lastFinishedPulling="2026-02-18 19:28:05.786705312 +0000 UTC m=+645.491637977" observedRunningTime="2026-02-18 19:28:07.756124084 +0000 UTC m=+647.461056759" watchObservedRunningTime="2026-02-18 19:28:07.757702706 +0000 UTC m=+647.462635371" Feb 18 19:28:15 crc kubenswrapper[4942]: I0218 19:28:15.864207 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.771311 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2"] Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.773200 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.784135 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.790899 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2"] Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.889821 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjxvj\" (UniqueName: \"kubernetes.io/projected/13b36241-8d25-425c-a2bb-ad032c01715e-kube-api-access-qjxvj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.889901 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.889992 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.991241 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjxvj\" (UniqueName: \"kubernetes.io/projected/13b36241-8d25-425c-a2bb-ad032c01715e-kube-api-access-qjxvj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.991287 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.991341 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.991842 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.992013 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:32 crc kubenswrapper[4942]: I0218 19:28:32.012642 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjxvj\" (UniqueName: \"kubernetes.io/projected/13b36241-8d25-425c-a2bb-ad032c01715e-kube-api-access-qjxvj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:32 crc kubenswrapper[4942]: I0218 19:28:32.102346 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:32 crc kubenswrapper[4942]: I0218 19:28:32.365148 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2"] Feb 18 19:28:32 crc kubenswrapper[4942]: I0218 19:28:32.801082 4942 generic.go:334] "Generic (PLEG): container finished" podID="13b36241-8d25-425c-a2bb-ad032c01715e" containerID="b9855a89980d529712bdb1ac0219e24a48d2bfa0e5ac826ad414d05d73c7a8bc" exitCode=0 Feb 18 19:28:32 crc kubenswrapper[4942]: I0218 19:28:32.801153 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" event={"ID":"13b36241-8d25-425c-a2bb-ad032c01715e","Type":"ContainerDied","Data":"b9855a89980d529712bdb1ac0219e24a48d2bfa0e5ac826ad414d05d73c7a8bc"} Feb 18 19:28:32 crc kubenswrapper[4942]: I0218 19:28:32.801220 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" event={"ID":"13b36241-8d25-425c-a2bb-ad032c01715e","Type":"ContainerStarted","Data":"0aa1a26b7b5317dc8c29f64256ca8ed84810bdb3d8227b9965cdf2753bd0ff57"} Feb 18 19:28:34 crc kubenswrapper[4942]: I0218 19:28:34.819285 4942 generic.go:334] "Generic (PLEG): container finished" podID="13b36241-8d25-425c-a2bb-ad032c01715e" containerID="c16d573685b10c337397d32eabd7bb8785cd8921470fd92faebcb8240b0c04c8" exitCode=0 Feb 18 19:28:34 crc kubenswrapper[4942]: I0218 19:28:34.819429 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" event={"ID":"13b36241-8d25-425c-a2bb-ad032c01715e","Type":"ContainerDied","Data":"c16d573685b10c337397d32eabd7bb8785cd8921470fd92faebcb8240b0c04c8"} Feb 18 19:28:35 crc kubenswrapper[4942]: I0218 19:28:35.828651 4942 
generic.go:334] "Generic (PLEG): container finished" podID="13b36241-8d25-425c-a2bb-ad032c01715e" containerID="440b6fc2ef73ba0831ee5f1ed9047a7827643fed22e83a23678096e76380f922" exitCode=0 Feb 18 19:28:35 crc kubenswrapper[4942]: I0218 19:28:35.828986 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" event={"ID":"13b36241-8d25-425c-a2bb-ad032c01715e","Type":"ContainerDied","Data":"440b6fc2ef73ba0831ee5f1ed9047a7827643fed22e83a23678096e76380f922"} Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.202654 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.366065 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-util\") pod \"13b36241-8d25-425c-a2bb-ad032c01715e\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.366230 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-bundle\") pod \"13b36241-8d25-425c-a2bb-ad032c01715e\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.366285 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjxvj\" (UniqueName: \"kubernetes.io/projected/13b36241-8d25-425c-a2bb-ad032c01715e-kube-api-access-qjxvj\") pod \"13b36241-8d25-425c-a2bb-ad032c01715e\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.366798 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-bundle" (OuterVolumeSpecName: "bundle") pod "13b36241-8d25-425c-a2bb-ad032c01715e" (UID: "13b36241-8d25-425c-a2bb-ad032c01715e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.374076 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b36241-8d25-425c-a2bb-ad032c01715e-kube-api-access-qjxvj" (OuterVolumeSpecName: "kube-api-access-qjxvj") pod "13b36241-8d25-425c-a2bb-ad032c01715e" (UID: "13b36241-8d25-425c-a2bb-ad032c01715e"). InnerVolumeSpecName "kube-api-access-qjxvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.381049 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-util" (OuterVolumeSpecName: "util") pod "13b36241-8d25-425c-a2bb-ad032c01715e" (UID: "13b36241-8d25-425c-a2bb-ad032c01715e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.467849 4942 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.468245 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjxvj\" (UniqueName: \"kubernetes.io/projected/13b36241-8d25-425c-a2bb-ad032c01715e-kube-api-access-qjxvj\") on node \"crc\" DevicePath \"\"" Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.468263 4942 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.843862 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" event={"ID":"13b36241-8d25-425c-a2bb-ad032c01715e","Type":"ContainerDied","Data":"0aa1a26b7b5317dc8c29f64256ca8ed84810bdb3d8227b9965cdf2753bd0ff57"} Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.843905 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aa1a26b7b5317dc8c29f64256ca8ed84810bdb3d8227b9965cdf2753bd0ff57" Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.843986 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.420318 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-h4gr7"] Feb 18 19:28:43 crc kubenswrapper[4942]: E0218 19:28:43.420938 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b36241-8d25-425c-a2bb-ad032c01715e" containerName="pull" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.420950 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b36241-8d25-425c-a2bb-ad032c01715e" containerName="pull" Feb 18 19:28:43 crc kubenswrapper[4942]: E0218 19:28:43.420961 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b36241-8d25-425c-a2bb-ad032c01715e" containerName="util" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.420967 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b36241-8d25-425c-a2bb-ad032c01715e" containerName="util" Feb 18 19:28:43 crc kubenswrapper[4942]: E0218 19:28:43.420980 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b36241-8d25-425c-a2bb-ad032c01715e" containerName="extract" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.420986 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b36241-8d25-425c-a2bb-ad032c01715e" containerName="extract" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.421067 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="13b36241-8d25-425c-a2bb-ad032c01715e" containerName="extract" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.421506 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-h4gr7" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.423873 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-5swn2" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.423966 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.427457 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.432628 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-h4gr7"] Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.443973 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvbhs\" (UniqueName: \"kubernetes.io/projected/2f24c234-adb6-4353-94d0-c91f7d538d3d-kube-api-access-mvbhs\") pod \"nmstate-operator-694c9596b7-h4gr7\" (UID: \"2f24c234-adb6-4353-94d0-c91f7d538d3d\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-h4gr7" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.544957 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvbhs\" (UniqueName: \"kubernetes.io/projected/2f24c234-adb6-4353-94d0-c91f7d538d3d-kube-api-access-mvbhs\") pod \"nmstate-operator-694c9596b7-h4gr7\" (UID: \"2f24c234-adb6-4353-94d0-c91f7d538d3d\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-h4gr7" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.570807 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvbhs\" (UniqueName: \"kubernetes.io/projected/2f24c234-adb6-4353-94d0-c91f7d538d3d-kube-api-access-mvbhs\") pod \"nmstate-operator-694c9596b7-h4gr7\" (UID: 
\"2f24c234-adb6-4353-94d0-c91f7d538d3d\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-h4gr7" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.744791 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-h4gr7" Feb 18 19:28:44 crc kubenswrapper[4942]: I0218 19:28:44.052561 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-h4gr7"] Feb 18 19:28:44 crc kubenswrapper[4942]: W0218 19:28:44.055509 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f24c234_adb6_4353_94d0_c91f7d538d3d.slice/crio-bceef4da1e07916b4ede2c4038e4d07623456c4674603dd4e83a76af0f17dc7c WatchSource:0}: Error finding container bceef4da1e07916b4ede2c4038e4d07623456c4674603dd4e83a76af0f17dc7c: Status 404 returned error can't find the container with id bceef4da1e07916b4ede2c4038e4d07623456c4674603dd4e83a76af0f17dc7c Feb 18 19:28:44 crc kubenswrapper[4942]: I0218 19:28:44.910488 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-h4gr7" event={"ID":"2f24c234-adb6-4353-94d0-c91f7d538d3d","Type":"ContainerStarted","Data":"bceef4da1e07916b4ede2c4038e4d07623456c4674603dd4e83a76af0f17dc7c"} Feb 18 19:28:47 crc kubenswrapper[4942]: I0218 19:28:47.929178 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-h4gr7" event={"ID":"2f24c234-adb6-4353-94d0-c91f7d538d3d","Type":"ContainerStarted","Data":"e8f2bc028988ca179eba1ab033011b48819215697390f2d0d534b8e8731572ca"} Feb 18 19:28:47 crc kubenswrapper[4942]: I0218 19:28:47.953637 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-h4gr7" podStartSLOduration=2.115184122 podStartE2EDuration="4.953617204s" podCreationTimestamp="2026-02-18 19:28:43 +0000 UTC" 
firstStartedPulling="2026-02-18 19:28:44.057297772 +0000 UTC m=+683.762230437" lastFinishedPulling="2026-02-18 19:28:46.895730814 +0000 UTC m=+686.600663519" observedRunningTime="2026-02-18 19:28:47.949785339 +0000 UTC m=+687.654718014" watchObservedRunningTime="2026-02-18 19:28:47.953617204 +0000 UTC m=+687.658549889" Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.920455 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-rttzx"] Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.922118 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.924875 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-r5wfx" Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.947254 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw"] Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.948395 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.952622 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.977226 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-rttzx"] Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.984906 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-plkfj"] Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.985963 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.994056 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw"] Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.068054 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45xgj\" (UniqueName: \"kubernetes.io/projected/76ecd9a6-426a-4dd2-b701-dc478849bf8c-kube-api-access-45xgj\") pod \"nmstate-metrics-58c85c668d-rttzx\" (UID: \"76ecd9a6-426a-4dd2-b701-dc478849bf8c\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.068138 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5f16510c-481e-41fd-a588-da27d576478c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-bqlvw\" (UID: \"5f16510c-481e-41fd-a588-da27d576478c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.068177 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6cm4\" (UniqueName: \"kubernetes.io/projected/5f16510c-481e-41fd-a588-da27d576478c-kube-api-access-d6cm4\") pod \"nmstate-webhook-866bcb46dc-bqlvw\" (UID: \"5f16510c-481e-41fd-a588-da27d576478c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.085969 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl"] Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.086824 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.089883 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.089936 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-fgggf" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.089950 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.103825 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl"] Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.169311 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdjcw\" (UniqueName: \"kubernetes.io/projected/0069ee73-95fc-4f06-980a-585ed1af868b-kube-api-access-vdjcw\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.169361 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-ovs-socket\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.169435 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45xgj\" (UniqueName: \"kubernetes.io/projected/76ecd9a6-426a-4dd2-b701-dc478849bf8c-kube-api-access-45xgj\") pod \"nmstate-metrics-58c85c668d-rttzx\" (UID: \"76ecd9a6-426a-4dd2-b701-dc478849bf8c\") " 
pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.169468 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-dbus-socket\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.170053 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5f16510c-481e-41fd-a588-da27d576478c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-bqlvw\" (UID: \"5f16510c-481e-41fd-a588-da27d576478c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:53 crc kubenswrapper[4942]: E0218 19:28:53.170170 4942 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 18 19:28:53 crc kubenswrapper[4942]: E0218 19:28:53.170241 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f16510c-481e-41fd-a588-da27d576478c-tls-key-pair podName:5f16510c-481e-41fd-a588-da27d576478c nodeName:}" failed. No retries permitted until 2026-02-18 19:28:53.670212195 +0000 UTC m=+693.375144850 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/5f16510c-481e-41fd-a588-da27d576478c-tls-key-pair") pod "nmstate-webhook-866bcb46dc-bqlvw" (UID: "5f16510c-481e-41fd-a588-da27d576478c") : secret "openshift-nmstate-webhook" not found Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.170321 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6cm4\" (UniqueName: \"kubernetes.io/projected/5f16510c-481e-41fd-a588-da27d576478c-kube-api-access-d6cm4\") pod \"nmstate-webhook-866bcb46dc-bqlvw\" (UID: \"5f16510c-481e-41fd-a588-da27d576478c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.170507 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-nmstate-lock\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.192422 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45xgj\" (UniqueName: \"kubernetes.io/projected/76ecd9a6-426a-4dd2-b701-dc478849bf8c-kube-api-access-45xgj\") pod \"nmstate-metrics-58c85c668d-rttzx\" (UID: \"76ecd9a6-426a-4dd2-b701-dc478849bf8c\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.205054 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6cm4\" (UniqueName: \"kubernetes.io/projected/5f16510c-481e-41fd-a588-da27d576478c-kube-api-access-d6cm4\") pod \"nmstate-webhook-866bcb46dc-bqlvw\" (UID: \"5f16510c-481e-41fd-a588-da27d576478c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.241837 4942 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.272011 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fd2af045-6e0c-43de-8714-f052306c8899-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.272068 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-dbus-socket\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.272429 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-dbus-socket\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.272485 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnsmg\" (UniqueName: \"kubernetes.io/projected/fd2af045-6e0c-43de-8714-f052306c8899-kube-api-access-pnsmg\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.272514 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fd2af045-6e0c-43de-8714-f052306c8899-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.272604 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-nmstate-lock\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.272678 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-nmstate-lock\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.272641 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdjcw\" (UniqueName: \"kubernetes.io/projected/0069ee73-95fc-4f06-980a-585ed1af868b-kube-api-access-vdjcw\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.288602 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-ovs-socket\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.288789 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-ovs-socket\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.321572 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-8b7698c8d-dspsm"] Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.323120 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.339159 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8b7698c8d-dspsm"] Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.340048 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdjcw\" (UniqueName: \"kubernetes.io/projected/0069ee73-95fc-4f06-980a-585ed1af868b-kube-api-access-vdjcw\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.390214 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fd2af045-6e0c-43de-8714-f052306c8899-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.390304 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnsmg\" (UniqueName: \"kubernetes.io/projected/fd2af045-6e0c-43de-8714-f052306c8899-kube-api-access-pnsmg\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: 
I0218 19:28:53.390487 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd2af045-6e0c-43de-8714-f052306c8899-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.391637 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fd2af045-6e0c-43de-8714-f052306c8899-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.396256 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd2af045-6e0c-43de-8714-f052306c8899-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.412031 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnsmg\" (UniqueName: \"kubernetes.io/projected/fd2af045-6e0c-43de-8714-f052306c8899-kube-api-access-pnsmg\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.491489 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-trusted-ca-bundle\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " 
pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.491537 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-console-config\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.491559 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-oauth-serving-cert\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.491739 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwsm2\" (UniqueName: \"kubernetes.io/projected/238ade24-4172-473c-b7e5-c51e7ecce031-kube-api-access-zwsm2\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.491804 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-service-ca\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.491827 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/238ade24-4172-473c-b7e5-c51e7ecce031-console-oauth-config\") pod 
\"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.492026 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/238ade24-4172-473c-b7e5-c51e7ecce031-console-serving-cert\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.593305 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/238ade24-4172-473c-b7e5-c51e7ecce031-console-serving-cert\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.593375 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-trusted-ca-bundle\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.593397 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-console-config\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.593413 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-oauth-serving-cert\") pod 
\"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.593457 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwsm2\" (UniqueName: \"kubernetes.io/projected/238ade24-4172-473c-b7e5-c51e7ecce031-kube-api-access-zwsm2\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.593476 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-service-ca\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.593493 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/238ade24-4172-473c-b7e5-c51e7ecce031-console-oauth-config\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.594493 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-console-config\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.595288 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-oauth-serving-cert\") pod \"console-8b7698c8d-dspsm\" (UID: 
\"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.595305 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-trusted-ca-bundle\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.595312 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-rttzx"] Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.595414 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-service-ca\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.599032 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/238ade24-4172-473c-b7e5-c51e7ecce031-console-oauth-config\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.599314 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/238ade24-4172-473c-b7e5-c51e7ecce031-console-serving-cert\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.605518 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.614824 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwsm2\" (UniqueName: \"kubernetes.io/projected/238ade24-4172-473c-b7e5-c51e7ecce031-kube-api-access-zwsm2\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: W0218 19:28:53.625566 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0069ee73_95fc_4f06_980a_585ed1af868b.slice/crio-834bf83da2a9a51758c3d8dedc723d3fc8b39d236b3e0c59e98e95df37d76594 WatchSource:0}: Error finding container 834bf83da2a9a51758c3d8dedc723d3fc8b39d236b3e0c59e98e95df37d76594: Status 404 returned error can't find the container with id 834bf83da2a9a51758c3d8dedc723d3fc8b39d236b3e0c59e98e95df37d76594 Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.659435 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.695099 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5f16510c-481e-41fd-a588-da27d576478c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-bqlvw\" (UID: \"5f16510c-481e-41fd-a588-da27d576478c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.698238 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5f16510c-481e-41fd-a588-da27d576478c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-bqlvw\" (UID: \"5f16510c-481e-41fd-a588-da27d576478c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.700110 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.865411 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.866182 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8b7698c8d-dspsm"] Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.910097 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl"] Feb 18 19:28:53 crc kubenswrapper[4942]: W0218 19:28:53.915274 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd2af045_6e0c_43de_8714_f052306c8899.slice/crio-2436b1aae004bf913cf836ae8657158f8e9a7ab797094b8bdfd2fbb694c29eaf WatchSource:0}: Error finding container 2436b1aae004bf913cf836ae8657158f8e9a7ab797094b8bdfd2fbb694c29eaf: Status 404 returned error can't find the container with id 2436b1aae004bf913cf836ae8657158f8e9a7ab797094b8bdfd2fbb694c29eaf Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.985835 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-plkfj" event={"ID":"0069ee73-95fc-4f06-980a-585ed1af868b","Type":"ContainerStarted","Data":"834bf83da2a9a51758c3d8dedc723d3fc8b39d236b3e0c59e98e95df37d76594"} Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.987781 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8b7698c8d-dspsm" event={"ID":"238ade24-4172-473c-b7e5-c51e7ecce031","Type":"ContainerStarted","Data":"0ccacfcc43c2a6cfb74658e58ef58ab962555a2736895f73d2af1368c89dbcd7"} Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.988746 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" event={"ID":"fd2af045-6e0c-43de-8714-f052306c8899","Type":"ContainerStarted","Data":"2436b1aae004bf913cf836ae8657158f8e9a7ab797094b8bdfd2fbb694c29eaf"} Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 
19:28:53.989852 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" event={"ID":"76ecd9a6-426a-4dd2-b701-dc478849bf8c","Type":"ContainerStarted","Data":"6fa5d6079c231fa1550709d168b8ee8e1d57c942b4933df3d72edf9c870e7152"} Feb 18 19:28:54 crc kubenswrapper[4942]: I0218 19:28:54.130167 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw"] Feb 18 19:28:54 crc kubenswrapper[4942]: W0218 19:28:54.135582 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f16510c_481e_41fd_a588_da27d576478c.slice/crio-7be0c00003b55b2b5c5bc5a95a7a95095dc8d13a8bc2a94ceb155474f53862f3 WatchSource:0}: Error finding container 7be0c00003b55b2b5c5bc5a95a7a95095dc8d13a8bc2a94ceb155474f53862f3: Status 404 returned error can't find the container with id 7be0c00003b55b2b5c5bc5a95a7a95095dc8d13a8bc2a94ceb155474f53862f3 Feb 18 19:28:54 crc kubenswrapper[4942]: I0218 19:28:54.997920 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8b7698c8d-dspsm" event={"ID":"238ade24-4172-473c-b7e5-c51e7ecce031","Type":"ContainerStarted","Data":"5d112f5c3c4b1b6ee19325c6fe6b02c146dbfa2faaea1efac98f91ea5ee8f1b3"} Feb 18 19:28:55 crc kubenswrapper[4942]: I0218 19:28:55.002997 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" event={"ID":"5f16510c-481e-41fd-a588-da27d576478c","Type":"ContainerStarted","Data":"7be0c00003b55b2b5c5bc5a95a7a95095dc8d13a8bc2a94ceb155474f53862f3"} Feb 18 19:28:55 crc kubenswrapper[4942]: I0218 19:28:55.022062 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-8b7698c8d-dspsm" podStartSLOduration=2.022035699 podStartE2EDuration="2.022035699s" podCreationTimestamp="2026-02-18 19:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:28:55.014653277 +0000 UTC m=+694.719585972" watchObservedRunningTime="2026-02-18 19:28:55.022035699 +0000 UTC m=+694.726968404" Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.032523 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-plkfj" event={"ID":"0069ee73-95fc-4f06-980a-585ed1af868b","Type":"ContainerStarted","Data":"cc82c8372ba2ff3eb746eef78db366f9b56189a4d900f3a755c68bff8b3a9ae3"} Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.033251 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.034735 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" event={"ID":"fd2af045-6e0c-43de-8714-f052306c8899","Type":"ContainerStarted","Data":"d165365662a30777e1e51ab8a0636e83fd0404f57429c996a6c1c5303716c2ce"} Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.037272 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" event={"ID":"5f16510c-481e-41fd-a588-da27d576478c","Type":"ContainerStarted","Data":"9f5b68e120fc8ca512adc6d2d5994f9a2ce41a443b782c0fcfb090a39d700a90"} Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.037390 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.038430 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" event={"ID":"76ecd9a6-426a-4dd2-b701-dc478849bf8c","Type":"ContainerStarted","Data":"0db4d45bd4c037d8c179c5a9e9e987896667e0553bf4015a5670fa5c63b63c5e"} Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.060959 4942 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-nmstate/nmstate-handler-plkfj" podStartSLOduration=2.841517816 podStartE2EDuration="6.060934646s" podCreationTimestamp="2026-02-18 19:28:52 +0000 UTC" firstStartedPulling="2026-02-18 19:28:53.627325004 +0000 UTC m=+693.332257669" lastFinishedPulling="2026-02-18 19:28:56.846741814 +0000 UTC m=+696.551674499" observedRunningTime="2026-02-18 19:28:58.049822052 +0000 UTC m=+697.754754737" watchObservedRunningTime="2026-02-18 19:28:58.060934646 +0000 UTC m=+697.765867321" Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.070679 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" podStartSLOduration=2.166569471 podStartE2EDuration="5.070658007s" podCreationTimestamp="2026-02-18 19:28:53 +0000 UTC" firstStartedPulling="2026-02-18 19:28:53.917630789 +0000 UTC m=+693.622563454" lastFinishedPulling="2026-02-18 19:28:56.821719325 +0000 UTC m=+696.526651990" observedRunningTime="2026-02-18 19:28:58.066176086 +0000 UTC m=+697.771108791" watchObservedRunningTime="2026-02-18 19:28:58.070658007 +0000 UTC m=+697.775590672" Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.088878 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" podStartSLOduration=3.38577632 podStartE2EDuration="6.088859127s" podCreationTimestamp="2026-02-18 19:28:52 +0000 UTC" firstStartedPulling="2026-02-18 19:28:54.137441643 +0000 UTC m=+693.842374308" lastFinishedPulling="2026-02-18 19:28:56.84052441 +0000 UTC m=+696.545457115" observedRunningTime="2026-02-18 19:28:58.084887939 +0000 UTC m=+697.789820624" watchObservedRunningTime="2026-02-18 19:28:58.088859127 +0000 UTC m=+697.793791792" Feb 18 19:29:00 crc kubenswrapper[4942]: I0218 19:29:00.062665 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" 
event={"ID":"76ecd9a6-426a-4dd2-b701-dc478849bf8c","Type":"ContainerStarted","Data":"601c55800d31e3074e9adbe935099ddd7251b2b7763286942062bbbfcdc8a012"} Feb 18 19:29:00 crc kubenswrapper[4942]: I0218 19:29:00.095843 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" podStartSLOduration=2.736358299 podStartE2EDuration="8.095814947s" podCreationTimestamp="2026-02-18 19:28:52 +0000 UTC" firstStartedPulling="2026-02-18 19:28:53.616023665 +0000 UTC m=+693.320956330" lastFinishedPulling="2026-02-18 19:28:58.975480313 +0000 UTC m=+698.680412978" observedRunningTime="2026-02-18 19:29:00.090209308 +0000 UTC m=+699.795142003" watchObservedRunningTime="2026-02-18 19:29:00.095814947 +0000 UTC m=+699.800747642" Feb 18 19:29:03 crc kubenswrapper[4942]: I0218 19:29:03.632575 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:29:03 crc kubenswrapper[4942]: I0218 19:29:03.660477 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:29:03 crc kubenswrapper[4942]: I0218 19:29:03.660525 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:29:03 crc kubenswrapper[4942]: I0218 19:29:03.664668 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:29:04 crc kubenswrapper[4942]: I0218 19:29:04.087577 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:29:04 crc kubenswrapper[4942]: I0218 19:29:04.143040 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5l26l"] Feb 18 19:29:13 crc kubenswrapper[4942]: I0218 19:29:13.872138 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.339262 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc"] Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.342212 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.345260 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.354682 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc"] Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.453376 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.453579 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2q9v\" (UniqueName: \"kubernetes.io/projected/aa407b7c-08d9-4762-9aea-25d6aa8e4338-kube-api-access-d2q9v\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.453749 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.555251 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.555322 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2q9v\" (UniqueName: \"kubernetes.io/projected/aa407b7c-08d9-4762-9aea-25d6aa8e4338-kube-api-access-d2q9v\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.555354 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.555879 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-util\") pod 
\"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.555879 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.578630 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2q9v\" (UniqueName: \"kubernetes.io/projected/aa407b7c-08d9-4762-9aea-25d6aa8e4338-kube-api-access-d2q9v\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.698121 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.113739 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc"] Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.177302 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-5l26l" podUID="5683bb73-dc7f-40ed-86cd-0c08f2d38147" containerName="console" containerID="cri-o://49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704" gracePeriod=15 Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.271235 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" event={"ID":"aa407b7c-08d9-4762-9aea-25d6aa8e4338","Type":"ContainerStarted","Data":"b2a2b13a57b633a3c9df216efc2916398a4a95bc31b1ab284c19a801ebf1cb8e"} Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.271277 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" event={"ID":"aa407b7c-08d9-4762-9aea-25d6aa8e4338","Type":"ContainerStarted","Data":"9cf5918a0319959be412172b0bd964ed900873c62b9ea55107030c04fc05324f"} Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.531666 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5l26l_5683bb73-dc7f-40ed-86cd-0c08f2d38147/console/0.log" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.532028 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.668702 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-oauth-config\") pod \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.668825 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-oauth-serving-cert\") pod \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.668898 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn6b4\" (UniqueName: \"kubernetes.io/projected/5683bb73-dc7f-40ed-86cd-0c08f2d38147-kube-api-access-nn6b4\") pod \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.668958 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-config\") pod \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.669013 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-trusted-ca-bundle\") pod \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.669080 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-service-ca\") pod \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.669161 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-serving-cert\") pod \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.670474 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5683bb73-dc7f-40ed-86cd-0c08f2d38147" (UID: "5683bb73-dc7f-40ed-86cd-0c08f2d38147"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.670877 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-config" (OuterVolumeSpecName: "console-config") pod "5683bb73-dc7f-40ed-86cd-0c08f2d38147" (UID: "5683bb73-dc7f-40ed-86cd-0c08f2d38147"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.670911 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5683bb73-dc7f-40ed-86cd-0c08f2d38147" (UID: "5683bb73-dc7f-40ed-86cd-0c08f2d38147"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.671344 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-service-ca" (OuterVolumeSpecName: "service-ca") pod "5683bb73-dc7f-40ed-86cd-0c08f2d38147" (UID: "5683bb73-dc7f-40ed-86cd-0c08f2d38147"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.677679 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5683bb73-dc7f-40ed-86cd-0c08f2d38147" (UID: "5683bb73-dc7f-40ed-86cd-0c08f2d38147"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.678107 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5683bb73-dc7f-40ed-86cd-0c08f2d38147-kube-api-access-nn6b4" (OuterVolumeSpecName: "kube-api-access-nn6b4") pod "5683bb73-dc7f-40ed-86cd-0c08f2d38147" (UID: "5683bb73-dc7f-40ed-86cd-0c08f2d38147"). InnerVolumeSpecName "kube-api-access-nn6b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.678210 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5683bb73-dc7f-40ed-86cd-0c08f2d38147" (UID: "5683bb73-dc7f-40ed-86cd-0c08f2d38147"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.770455 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn6b4\" (UniqueName: \"kubernetes.io/projected/5683bb73-dc7f-40ed-86cd-0c08f2d38147-kube-api-access-nn6b4\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.770506 4942 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.770522 4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.770538 4942 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.770553 4942 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.770568 4942 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.770618 4942 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:30 crc 
kubenswrapper[4942]: I0218 19:29:30.280419 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5l26l_5683bb73-dc7f-40ed-86cd-0c08f2d38147/console/0.log" Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.280482 4942 generic.go:334] "Generic (PLEG): container finished" podID="5683bb73-dc7f-40ed-86cd-0c08f2d38147" containerID="49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704" exitCode=2 Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.280553 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5l26l" event={"ID":"5683bb73-dc7f-40ed-86cd-0c08f2d38147","Type":"ContainerDied","Data":"49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704"} Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.280633 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.280863 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5l26l" event={"ID":"5683bb73-dc7f-40ed-86cd-0c08f2d38147","Type":"ContainerDied","Data":"76d66aaf89f1a5aa5957e318124bcfa92f6a6c37df6e5abcffc91fd45db84790"} Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.280893 4942 scope.go:117] "RemoveContainer" containerID="49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704" Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.282877 4942 generic.go:334] "Generic (PLEG): container finished" podID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerID="b2a2b13a57b633a3c9df216efc2916398a4a95bc31b1ab284c19a801ebf1cb8e" exitCode=0 Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.282901 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" 
event={"ID":"aa407b7c-08d9-4762-9aea-25d6aa8e4338","Type":"ContainerDied","Data":"b2a2b13a57b633a3c9df216efc2916398a4a95bc31b1ab284c19a801ebf1cb8e"} Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.308175 4942 scope.go:117] "RemoveContainer" containerID="49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704" Feb 18 19:29:30 crc kubenswrapper[4942]: E0218 19:29:30.308496 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704\": container with ID starting with 49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704 not found: ID does not exist" containerID="49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704" Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.308526 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704"} err="failed to get container status \"49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704\": rpc error: code = NotFound desc = could not find container \"49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704\": container with ID starting with 49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704 not found: ID does not exist" Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.327533 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5l26l"] Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.331103 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-5l26l"] Feb 18 19:29:31 crc kubenswrapper[4942]: I0218 19:29:31.046259 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5683bb73-dc7f-40ed-86cd-0c08f2d38147" path="/var/lib/kubelet/pods/5683bb73-dc7f-40ed-86cd-0c08f2d38147/volumes" Feb 18 19:29:33 crc 
kubenswrapper[4942]: I0218 19:29:33.312595 4942 generic.go:334] "Generic (PLEG): container finished" podID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerID="3c528a9e0c17a160edba2f2a8a2b69ef6605da960e82d1cc9013c779b490ba81" exitCode=0 Feb 18 19:29:33 crc kubenswrapper[4942]: I0218 19:29:33.312684 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" event={"ID":"aa407b7c-08d9-4762-9aea-25d6aa8e4338","Type":"ContainerDied","Data":"3c528a9e0c17a160edba2f2a8a2b69ef6605da960e82d1cc9013c779b490ba81"} Feb 18 19:29:34 crc kubenswrapper[4942]: I0218 19:29:34.322472 4942 generic.go:334] "Generic (PLEG): container finished" podID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerID="5f1301628b22efbf7e7a27e3b565eeb55a5c343b9627acbcb363cadab023d5dc" exitCode=0 Feb 18 19:29:34 crc kubenswrapper[4942]: I0218 19:29:34.322519 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" event={"ID":"aa407b7c-08d9-4762-9aea-25d6aa8e4338","Type":"ContainerDied","Data":"5f1301628b22efbf7e7a27e3b565eeb55a5c343b9627acbcb363cadab023d5dc"} Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.658078 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.751842 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2q9v\" (UniqueName: \"kubernetes.io/projected/aa407b7c-08d9-4762-9aea-25d6aa8e4338-kube-api-access-d2q9v\") pod \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.751949 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-bundle\") pod \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.752107 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-util\") pod \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.753259 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-bundle" (OuterVolumeSpecName: "bundle") pod "aa407b7c-08d9-4762-9aea-25d6aa8e4338" (UID: "aa407b7c-08d9-4762-9aea-25d6aa8e4338"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.757570 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa407b7c-08d9-4762-9aea-25d6aa8e4338-kube-api-access-d2q9v" (OuterVolumeSpecName: "kube-api-access-d2q9v") pod "aa407b7c-08d9-4762-9aea-25d6aa8e4338" (UID: "aa407b7c-08d9-4762-9aea-25d6aa8e4338"). InnerVolumeSpecName "kube-api-access-d2q9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.776579 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-util" (OuterVolumeSpecName: "util") pod "aa407b7c-08d9-4762-9aea-25d6aa8e4338" (UID: "aa407b7c-08d9-4762-9aea-25d6aa8e4338"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.853575 4942 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.853612 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2q9v\" (UniqueName: \"kubernetes.io/projected/aa407b7c-08d9-4762-9aea-25d6aa8e4338-kube-api-access-d2q9v\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.853629 4942 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:36 crc kubenswrapper[4942]: I0218 19:29:36.339035 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" event={"ID":"aa407b7c-08d9-4762-9aea-25d6aa8e4338","Type":"ContainerDied","Data":"9cf5918a0319959be412172b0bd964ed900873c62b9ea55107030c04fc05324f"} Feb 18 19:29:36 crc kubenswrapper[4942]: I0218 19:29:36.339078 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cf5918a0319959be412172b0bd964ed900873c62b9ea55107030c04fc05324f" Feb 18 19:29:36 crc kubenswrapper[4942]: I0218 19:29:36.339141 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.446419 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb"] Feb 18 19:29:49 crc kubenswrapper[4942]: E0218 19:29:49.447025 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerName="pull" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.447038 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerName="pull" Feb 18 19:29:49 crc kubenswrapper[4942]: E0218 19:29:49.447047 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerName="util" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.447053 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerName="util" Feb 18 19:29:49 crc kubenswrapper[4942]: E0218 19:29:49.447067 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerName="extract" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.447073 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerName="extract" Feb 18 19:29:49 crc kubenswrapper[4942]: E0218 19:29:49.447082 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5683bb73-dc7f-40ed-86cd-0c08f2d38147" containerName="console" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.447088 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5683bb73-dc7f-40ed-86cd-0c08f2d38147" containerName="console" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.447182 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" 
containerName="extract" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.447194 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="5683bb73-dc7f-40ed-86cd-0c08f2d38147" containerName="console" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.447576 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.449831 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.449866 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.450438 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.450622 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xbghr" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.450868 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.459080 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb"] Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.536739 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5ebea7c-4a93-46a4-866a-8e00981e0245-apiservice-cert\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 
18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.536883 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r58m4\" (UniqueName: \"kubernetes.io/projected/b5ebea7c-4a93-46a4-866a-8e00981e0245-kube-api-access-r58m4\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.536907 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5ebea7c-4a93-46a4-866a-8e00981e0245-webhook-cert\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.638417 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5ebea7c-4a93-46a4-866a-8e00981e0245-apiservice-cert\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.638473 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r58m4\" (UniqueName: \"kubernetes.io/projected/b5ebea7c-4a93-46a4-866a-8e00981e0245-kube-api-access-r58m4\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.638511 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/b5ebea7c-4a93-46a4-866a-8e00981e0245-webhook-cert\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.645566 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5ebea7c-4a93-46a4-866a-8e00981e0245-webhook-cert\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.650027 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5ebea7c-4a93-46a4-866a-8e00981e0245-apiservice-cert\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.663543 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r58m4\" (UniqueName: \"kubernetes.io/projected/b5ebea7c-4a93-46a4-866a-8e00981e0245-kube-api-access-r58m4\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.673974 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf"] Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.674907 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.676737 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.676837 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.677095 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-k4db2" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.691041 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf"] Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.739481 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52hv9\" (UniqueName: \"kubernetes.io/projected/5c90854a-ee13-4493-b4d1-7c891f1eb904-kube-api-access-52hv9\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.739549 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c90854a-ee13-4493-b4d1-7c891f1eb904-apiservice-cert\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.739864 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/5c90854a-ee13-4493-b4d1-7c891f1eb904-webhook-cert\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.765328 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.840905 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c90854a-ee13-4493-b4d1-7c891f1eb904-apiservice-cert\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.841022 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c90854a-ee13-4493-b4d1-7c891f1eb904-webhook-cert\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.841062 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52hv9\" (UniqueName: \"kubernetes.io/projected/5c90854a-ee13-4493-b4d1-7c891f1eb904-kube-api-access-52hv9\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.854520 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/5c90854a-ee13-4493-b4d1-7c891f1eb904-webhook-cert\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.855832 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c90854a-ee13-4493-b4d1-7c891f1eb904-apiservice-cert\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.866195 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52hv9\" (UniqueName: \"kubernetes.io/projected/5c90854a-ee13-4493-b4d1-7c891f1eb904-kube-api-access-52hv9\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:50 crc kubenswrapper[4942]: I0218 19:29:50.016298 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:50 crc kubenswrapper[4942]: I0218 19:29:50.064651 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb"] Feb 18 19:29:50 crc kubenswrapper[4942]: W0218 19:29:50.067104 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5ebea7c_4a93_46a4_866a_8e00981e0245.slice/crio-7a98a844c8a865a8ab547447405be51f137b9cfdf20d522c644d4b3db569b13d WatchSource:0}: Error finding container 7a98a844c8a865a8ab547447405be51f137b9cfdf20d522c644d4b3db569b13d: Status 404 returned error can't find the container with id 7a98a844c8a865a8ab547447405be51f137b9cfdf20d522c644d4b3db569b13d Feb 18 19:29:50 crc kubenswrapper[4942]: I0218 19:29:50.417933 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" event={"ID":"b5ebea7c-4a93-46a4-866a-8e00981e0245","Type":"ContainerStarted","Data":"7a98a844c8a865a8ab547447405be51f137b9cfdf20d522c644d4b3db569b13d"} Feb 18 19:29:50 crc kubenswrapper[4942]: I0218 19:29:50.477907 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf"] Feb 18 19:29:50 crc kubenswrapper[4942]: W0218 19:29:50.479153 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c90854a_ee13_4493_b4d1_7c891f1eb904.slice/crio-1609e9707c48d71f6a0118b27e06026b73cec6d29793a8bbf7aebb0e4b8ccc59 WatchSource:0}: Error finding container 1609e9707c48d71f6a0118b27e06026b73cec6d29793a8bbf7aebb0e4b8ccc59: Status 404 returned error can't find the container with id 1609e9707c48d71f6a0118b27e06026b73cec6d29793a8bbf7aebb0e4b8ccc59 Feb 18 19:29:51 crc kubenswrapper[4942]: I0218 19:29:51.423281 4942 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" event={"ID":"5c90854a-ee13-4493-b4d1-7c891f1eb904","Type":"ContainerStarted","Data":"1609e9707c48d71f6a0118b27e06026b73cec6d29793a8bbf7aebb0e4b8ccc59"} Feb 18 19:29:55 crc kubenswrapper[4942]: I0218 19:29:55.409322 4942 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 19:29:56 crc kubenswrapper[4942]: I0218 19:29:56.453193 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" event={"ID":"b5ebea7c-4a93-46a4-866a-8e00981e0245","Type":"ContainerStarted","Data":"6694ca2ca9f79d5f85fea2aba7e2c1aa2a7eb8584d8f17755ab8a68ba13d5b51"} Feb 18 19:29:56 crc kubenswrapper[4942]: I0218 19:29:56.453539 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:56 crc kubenswrapper[4942]: I0218 19:29:56.455285 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" event={"ID":"5c90854a-ee13-4493-b4d1-7c891f1eb904","Type":"ContainerStarted","Data":"500f83d0c676d2641cb6e778a46b8fdad6058b189d2f661876018118303d06ed"} Feb 18 19:29:56 crc kubenswrapper[4942]: I0218 19:29:56.455467 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:56 crc kubenswrapper[4942]: I0218 19:29:56.495701 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" podStartSLOduration=2.24724387 podStartE2EDuration="7.495679851s" podCreationTimestamp="2026-02-18 19:29:49 +0000 UTC" firstStartedPulling="2026-02-18 19:29:50.069309076 +0000 UTC m=+749.774241751" lastFinishedPulling="2026-02-18 19:29:55.317745027 +0000 UTC 
m=+755.022677732" observedRunningTime="2026-02-18 19:29:56.49017152 +0000 UTC m=+756.195104185" watchObservedRunningTime="2026-02-18 19:29:56.495679851 +0000 UTC m=+756.200612526" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.196216 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" podStartSLOduration=5.615165601 podStartE2EDuration="11.196196494s" podCreationTimestamp="2026-02-18 19:29:49 +0000 UTC" firstStartedPulling="2026-02-18 19:29:50.482323469 +0000 UTC m=+750.187256144" lastFinishedPulling="2026-02-18 19:29:56.063354372 +0000 UTC m=+755.768287037" observedRunningTime="2026-02-18 19:29:56.526665719 +0000 UTC m=+756.231598384" watchObservedRunningTime="2026-02-18 19:30:00.196196494 +0000 UTC m=+759.901129169" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.197243 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh"] Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.198146 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.200001 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.201189 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.208404 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh"] Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.297828 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50e6e4f2-9597-4f04-aa2d-d60b56446486-secret-volume\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.297899 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50e6e4f2-9597-4f04-aa2d-d60b56446486-config-volume\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.298151 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpns5\" (UniqueName: \"kubernetes.io/projected/50e6e4f2-9597-4f04-aa2d-d60b56446486-kube-api-access-rpns5\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.399600 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50e6e4f2-9597-4f04-aa2d-d60b56446486-secret-volume\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.399696 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50e6e4f2-9597-4f04-aa2d-d60b56446486-config-volume\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.399784 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpns5\" (UniqueName: \"kubernetes.io/projected/50e6e4f2-9597-4f04-aa2d-d60b56446486-kube-api-access-rpns5\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.400746 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50e6e4f2-9597-4f04-aa2d-d60b56446486-config-volume\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.411394 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/50e6e4f2-9597-4f04-aa2d-d60b56446486-secret-volume\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.574504 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpns5\" (UniqueName: \"kubernetes.io/projected/50e6e4f2-9597-4f04-aa2d-d60b56446486-kube-api-access-rpns5\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.816230 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:01 crc kubenswrapper[4942]: I0218 19:30:01.004107 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh"] Feb 18 19:30:01 crc kubenswrapper[4942]: I0218 19:30:01.491354 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" event={"ID":"50e6e4f2-9597-4f04-aa2d-d60b56446486","Type":"ContainerStarted","Data":"a8b1969ada1b3f8254fddc0c25babc6706a63d49bbd527b2c0f97f6fdf13b622"} Feb 18 19:30:02 crc kubenswrapper[4942]: I0218 19:30:02.501628 4942 generic.go:334] "Generic (PLEG): container finished" podID="50e6e4f2-9597-4f04-aa2d-d60b56446486" containerID="45f611558efef294793c691f22c0d11c4ce92907ad4ca205006156562d59216c" exitCode=0 Feb 18 19:30:02 crc kubenswrapper[4942]: I0218 19:30:02.501738 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" 
event={"ID":"50e6e4f2-9597-4f04-aa2d-d60b56446486","Type":"ContainerDied","Data":"45f611558efef294793c691f22c0d11c4ce92907ad4ca205006156562d59216c"} Feb 18 19:30:03 crc kubenswrapper[4942]: I0218 19:30:03.813081 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:03 crc kubenswrapper[4942]: I0218 19:30:03.944680 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpns5\" (UniqueName: \"kubernetes.io/projected/50e6e4f2-9597-4f04-aa2d-d60b56446486-kube-api-access-rpns5\") pod \"50e6e4f2-9597-4f04-aa2d-d60b56446486\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " Feb 18 19:30:03 crc kubenswrapper[4942]: I0218 19:30:03.944834 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50e6e4f2-9597-4f04-aa2d-d60b56446486-config-volume\") pod \"50e6e4f2-9597-4f04-aa2d-d60b56446486\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " Feb 18 19:30:03 crc kubenswrapper[4942]: I0218 19:30:03.944881 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50e6e4f2-9597-4f04-aa2d-d60b56446486-secret-volume\") pod \"50e6e4f2-9597-4f04-aa2d-d60b56446486\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " Feb 18 19:30:03 crc kubenswrapper[4942]: I0218 19:30:03.946243 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50e6e4f2-9597-4f04-aa2d-d60b56446486-config-volume" (OuterVolumeSpecName: "config-volume") pod "50e6e4f2-9597-4f04-aa2d-d60b56446486" (UID: "50e6e4f2-9597-4f04-aa2d-d60b56446486"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:30:03 crc kubenswrapper[4942]: I0218 19:30:03.950440 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50e6e4f2-9597-4f04-aa2d-d60b56446486-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "50e6e4f2-9597-4f04-aa2d-d60b56446486" (UID: "50e6e4f2-9597-4f04-aa2d-d60b56446486"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:30:03 crc kubenswrapper[4942]: I0218 19:30:03.959954 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e6e4f2-9597-4f04-aa2d-d60b56446486-kube-api-access-rpns5" (OuterVolumeSpecName: "kube-api-access-rpns5") pod "50e6e4f2-9597-4f04-aa2d-d60b56446486" (UID: "50e6e4f2-9597-4f04-aa2d-d60b56446486"). InnerVolumeSpecName "kube-api-access-rpns5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:30:04 crc kubenswrapper[4942]: I0218 19:30:04.046623 4942 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50e6e4f2-9597-4f04-aa2d-d60b56446486-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:30:04 crc kubenswrapper[4942]: I0218 19:30:04.046656 4942 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50e6e4f2-9597-4f04-aa2d-d60b56446486-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:30:04 crc kubenswrapper[4942]: I0218 19:30:04.046668 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpns5\" (UniqueName: \"kubernetes.io/projected/50e6e4f2-9597-4f04-aa2d-d60b56446486-kube-api-access-rpns5\") on node \"crc\" DevicePath \"\"" Feb 18 19:30:04 crc kubenswrapper[4942]: I0218 19:30:04.516419 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" 
event={"ID":"50e6e4f2-9597-4f04-aa2d-d60b56446486","Type":"ContainerDied","Data":"a8b1969ada1b3f8254fddc0c25babc6706a63d49bbd527b2c0f97f6fdf13b622"} Feb 18 19:30:04 crc kubenswrapper[4942]: I0218 19:30:04.516456 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8b1969ada1b3f8254fddc0c25babc6706a63d49bbd527b2c0f97f6fdf13b622" Feb 18 19:30:04 crc kubenswrapper[4942]: I0218 19:30:04.516478 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:10 crc kubenswrapper[4942]: I0218 19:30:10.021038 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:30:23 crc kubenswrapper[4942]: I0218 19:30:23.741320 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:30:23 crc kubenswrapper[4942]: I0218 19:30:23.742132 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:30:29 crc kubenswrapper[4942]: I0218 19:30:29.768568 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.629670 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb"] Feb 18 19:30:30 crc kubenswrapper[4942]: E0218 19:30:30.630326 4942 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e6e4f2-9597-4f04-aa2d-d60b56446486" containerName="collect-profiles" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.630360 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e6e4f2-9597-4f04-aa2d-d60b56446486" containerName="collect-profiles" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.630490 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e6e4f2-9597-4f04-aa2d-d60b56446486" containerName="collect-profiles" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.631040 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.637221 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5vknp" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.637240 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.648410 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb"] Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.664515 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4jkrm"] Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.666685 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.668652 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.668940 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.753509 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-pm8vg"] Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.758854 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-gzp79"] Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.759709 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.760259 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-pm8vg" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.761922 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-xpfsc" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.762199 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.762522 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.762560 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.762936 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.776430 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-gzp79"] Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.805726 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-frr-conf\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.805791 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2963214d-df0b-4249-832e-8396a15ed441-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-7ghrb\" (UID: \"2963214d-df0b-4249-832e-8396a15ed441\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.805815 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-reloader\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.806071 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eddf6439-0868-428b-9bc0-5b85371d6103-metrics-certs\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.806179 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-metrics\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.806225 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-frr-sockets\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.806310 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgwrl\" (UniqueName: \"kubernetes.io/projected/2963214d-df0b-4249-832e-8396a15ed441-kube-api-access-wgwrl\") pod \"frr-k8s-webhook-server-78b44bf5bb-7ghrb\" (UID: \"2963214d-df0b-4249-832e-8396a15ed441\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.806348 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-vx4g5\" (UniqueName: \"kubernetes.io/projected/eddf6439-0868-428b-9bc0-5b85371d6103-kube-api-access-vx4g5\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.806372 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eddf6439-0868-428b-9bc0-5b85371d6103-frr-startup\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908008 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-metrics-certs\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908054 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2qk2\" (UniqueName: \"kubernetes.io/projected/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-kube-api-access-f2qk2\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908073 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-metrics-certs\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908090 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-cert\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908114 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx4g5\" (UniqueName: \"kubernetes.io/projected/eddf6439-0868-428b-9bc0-5b85371d6103-kube-api-access-vx4g5\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908133 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908200 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eddf6439-0868-428b-9bc0-5b85371d6103-frr-startup\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908234 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-frr-conf\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908253 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-metallb-excludel2\") pod \"speaker-pm8vg\" (UID: 
\"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908277 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2963214d-df0b-4249-832e-8396a15ed441-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-7ghrb\" (UID: \"2963214d-df0b-4249-832e-8396a15ed441\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908293 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-reloader\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908315 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eddf6439-0868-428b-9bc0-5b85371d6103-metrics-certs\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908333 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-metrics\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908349 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-frr-sockets\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908384 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4fhv\" (UniqueName: \"kubernetes.io/projected/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-kube-api-access-r4fhv\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908407 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgwrl\" (UniqueName: \"kubernetes.io/projected/2963214d-df0b-4249-832e-8396a15ed441-kube-api-access-wgwrl\") pod \"frr-k8s-webhook-server-78b44bf5bb-7ghrb\" (UID: \"2963214d-df0b-4249-832e-8396a15ed441\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908775 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-frr-conf\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.909332 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eddf6439-0868-428b-9bc0-5b85371d6103-frr-startup\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.909575 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-frr-sockets\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.910229 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-reloader\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.910371 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-metrics\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.914309 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eddf6439-0868-428b-9bc0-5b85371d6103-metrics-certs\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.916815 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2963214d-df0b-4249-832e-8396a15ed441-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-7ghrb\" (UID: \"2963214d-df0b-4249-832e-8396a15ed441\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.922888 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx4g5\" (UniqueName: \"kubernetes.io/projected/eddf6439-0868-428b-9bc0-5b85371d6103-kube-api-access-vx4g5\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.928302 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgwrl\" (UniqueName: \"kubernetes.io/projected/2963214d-df0b-4249-832e-8396a15ed441-kube-api-access-wgwrl\") pod \"frr-k8s-webhook-server-78b44bf5bb-7ghrb\" (UID: \"2963214d-df0b-4249-832e-8396a15ed441\") " 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.003116 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.009563 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-metrics-certs\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.009631 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2qk2\" (UniqueName: \"kubernetes.io/projected/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-kube-api-access-f2qk2\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.009682 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-metrics-certs\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.009714 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-cert\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.009783 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist\") 
pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.009834 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-metallb-excludel2\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.009962 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4fhv\" (UniqueName: \"kubernetes.io/projected/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-kube-api-access-r4fhv\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:31 crc kubenswrapper[4942]: E0218 19:30:31.010006 4942 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 19:30:31 crc kubenswrapper[4942]: E0218 19:30:31.010103 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist podName:6176b1cc-ddc9-4bdd-9707-ae3c04996b6c nodeName:}" failed. No retries permitted until 2026-02-18 19:30:31.510078867 +0000 UTC m=+791.215011542 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist") pod "speaker-pm8vg" (UID: "6176b1cc-ddc9-4bdd-9707-ae3c04996b6c") : secret "metallb-memberlist" not found Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.010483 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.010885 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-metallb-excludel2\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.012391 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.012564 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-metrics-certs\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.022373 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-metrics-certs\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.026137 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-cert\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.026256 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2qk2\" (UniqueName: \"kubernetes.io/projected/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-kube-api-access-f2qk2\") pod \"speaker-pm8vg\" (UID: 
\"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.032599 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4fhv\" (UniqueName: \"kubernetes.io/projected/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-kube-api-access-r4fhv\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.076894 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.353154 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-gzp79"] Feb 18 19:30:31 crc kubenswrapper[4942]: W0218 19:30:31.356965 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56c6bc24_68cd_4bee_8746_d3cfd2bf97c7.slice/crio-2d44ce364a0306cba64752758aa782bac3237f81a5bd9b164fa2f61490b4c2cb WatchSource:0}: Error finding container 2d44ce364a0306cba64752758aa782bac3237f81a5bd9b164fa2f61490b4c2cb: Status 404 returned error can't find the container with id 2d44ce364a0306cba64752758aa782bac3237f81a5bd9b164fa2f61490b4c2cb Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.463074 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb"] Feb 18 19:30:31 crc kubenswrapper[4942]: W0218 19:30:31.468283 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2963214d_df0b_4249_832e_8396a15ed441.slice/crio-2997c4e70edc02d174d3015fa9ff906e14f93969e6d0d8e48d9ba24ac32f9433 WatchSource:0}: Error finding container 2997c4e70edc02d174d3015fa9ff906e14f93969e6d0d8e48d9ba24ac32f9433: Status 404 returned error 
can't find the container with id 2997c4e70edc02d174d3015fa9ff906e14f93969e6d0d8e48d9ba24ac32f9433 Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.517613 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:31 crc kubenswrapper[4942]: E0218 19:30:31.517741 4942 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 19:30:31 crc kubenswrapper[4942]: E0218 19:30:31.517831 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist podName:6176b1cc-ddc9-4bdd-9707-ae3c04996b6c nodeName:}" failed. No retries permitted until 2026-02-18 19:30:32.517814144 +0000 UTC m=+792.222746809 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist") pod "speaker-pm8vg" (UID: "6176b1cc-ddc9-4bdd-9707-ae3c04996b6c") : secret "metallb-memberlist" not found Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.748652 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" event={"ID":"2963214d-df0b-4249-832e-8396a15ed441","Type":"ContainerStarted","Data":"2997c4e70edc02d174d3015fa9ff906e14f93969e6d0d8e48d9ba24ac32f9433"} Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.750915 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-gzp79" event={"ID":"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7","Type":"ContainerStarted","Data":"a05e65257db07009159f617a08217d9ea2abf8742a300a1d8be2d852d6a4d7c9"} Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.750957 4942 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/controller-69bbfbf88f-gzp79" event={"ID":"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7","Type":"ContainerStarted","Data":"4ea91c287cbdb76922ecc10664e5b7478349c71e33e651fc26b3d04eb6ca2104"} Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.751149 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-gzp79" event={"ID":"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7","Type":"ContainerStarted","Data":"2d44ce364a0306cba64752758aa782bac3237f81a5bd9b164fa2f61490b4c2cb"} Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.751170 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.751958 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerStarted","Data":"490af75b4c49bef278b67009aeba1191dc6afda18526a7f8efcbec1110ae7761"} Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.770573 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-gzp79" podStartSLOduration=1.77055242 podStartE2EDuration="1.77055242s" podCreationTimestamp="2026-02-18 19:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:30:31.765725349 +0000 UTC m=+791.470658074" watchObservedRunningTime="2026-02-18 19:30:31.77055242 +0000 UTC m=+791.475485085" Feb 18 19:30:32 crc kubenswrapper[4942]: I0218 19:30:32.539037 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:32 crc kubenswrapper[4942]: I0218 
19:30:32.547190 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:32 crc kubenswrapper[4942]: I0218 19:30:32.596073 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-pm8vg" Feb 18 19:30:32 crc kubenswrapper[4942]: W0218 19:30:32.635945 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6176b1cc_ddc9_4bdd_9707_ae3c04996b6c.slice/crio-d33300bb504e8e3940697de6eb3f39b22ae6ce2e51f920a3015a463871cf39d2 WatchSource:0}: Error finding container d33300bb504e8e3940697de6eb3f39b22ae6ce2e51f920a3015a463871cf39d2: Status 404 returned error can't find the container with id d33300bb504e8e3940697de6eb3f39b22ae6ce2e51f920a3015a463871cf39d2 Feb 18 19:30:32 crc kubenswrapper[4942]: I0218 19:30:32.780140 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pm8vg" event={"ID":"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c","Type":"ContainerStarted","Data":"d33300bb504e8e3940697de6eb3f39b22ae6ce2e51f920a3015a463871cf39d2"} Feb 18 19:30:33 crc kubenswrapper[4942]: I0218 19:30:33.789512 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pm8vg" event={"ID":"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c","Type":"ContainerStarted","Data":"e33de20324e503945414965a83aef91476f07dbca665144cf9241c297ac44447"} Feb 18 19:30:33 crc kubenswrapper[4942]: I0218 19:30:33.789552 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pm8vg" event={"ID":"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c","Type":"ContainerStarted","Data":"5c7b4fffa33ee6a2c976ae51e80386ff0dcb569d33fafb7a2e26d771e81d5cd7"} Feb 18 19:30:33 crc kubenswrapper[4942]: I0218 19:30:33.789665 4942 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-pm8vg" Feb 18 19:30:33 crc kubenswrapper[4942]: I0218 19:30:33.817691 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-pm8vg" podStartSLOduration=3.8176779339999998 podStartE2EDuration="3.817677934s" podCreationTimestamp="2026-02-18 19:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:30:33.814217318 +0000 UTC m=+793.519149983" watchObservedRunningTime="2026-02-18 19:30:33.817677934 +0000 UTC m=+793.522610599" Feb 18 19:30:38 crc kubenswrapper[4942]: I0218 19:30:38.833152 4942 generic.go:334] "Generic (PLEG): container finished" podID="eddf6439-0868-428b-9bc0-5b85371d6103" containerID="a3f318eb388ac356022134dc246f5da98fd0b3d94b33bb3683437dc1c2a303b5" exitCode=0 Feb 18 19:30:38 crc kubenswrapper[4942]: I0218 19:30:38.833243 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerDied","Data":"a3f318eb388ac356022134dc246f5da98fd0b3d94b33bb3683437dc1c2a303b5"} Feb 18 19:30:38 crc kubenswrapper[4942]: I0218 19:30:38.836334 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" event={"ID":"2963214d-df0b-4249-832e-8396a15ed441","Type":"ContainerStarted","Data":"371cf545a7ea38f9136c8f015c3b70951e9bd3e49097f90e64801bc4067d1f18"} Feb 18 19:30:38 crc kubenswrapper[4942]: I0218 19:30:38.836744 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:38 crc kubenswrapper[4942]: I0218 19:30:38.908064 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" podStartSLOduration=2.00864274 
podStartE2EDuration="8.908039572s" podCreationTimestamp="2026-02-18 19:30:30 +0000 UTC" firstStartedPulling="2026-02-18 19:30:31.470300077 +0000 UTC m=+791.175232742" lastFinishedPulling="2026-02-18 19:30:38.369696869 +0000 UTC m=+798.074629574" observedRunningTime="2026-02-18 19:30:38.904840552 +0000 UTC m=+798.609773267" watchObservedRunningTime="2026-02-18 19:30:38.908039572 +0000 UTC m=+798.612972257" Feb 18 19:30:39 crc kubenswrapper[4942]: I0218 19:30:39.845527 4942 generic.go:334] "Generic (PLEG): container finished" podID="eddf6439-0868-428b-9bc0-5b85371d6103" containerID="302cb526dd3643bbbd7b7f2cc1e8ac09f60a5a155079b8b66fedee1897dd4fa2" exitCode=0 Feb 18 19:30:39 crc kubenswrapper[4942]: I0218 19:30:39.846077 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerDied","Data":"302cb526dd3643bbbd7b7f2cc1e8ac09f60a5a155079b8b66fedee1897dd4fa2"} Feb 18 19:30:40 crc kubenswrapper[4942]: I0218 19:30:40.858663 4942 generic.go:334] "Generic (PLEG): container finished" podID="eddf6439-0868-428b-9bc0-5b85371d6103" containerID="42a262e61b422bd818b4f6a5e771aa968b48b6d351161c933d1149199aa5c10c" exitCode=0 Feb 18 19:30:40 crc kubenswrapper[4942]: I0218 19:30:40.858722 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerDied","Data":"42a262e61b422bd818b4f6a5e771aa968b48b6d351161c933d1149199aa5c10c"} Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.088528 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.871216 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" 
event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerStarted","Data":"86097023d27d0341ea77cd48cdbf1e5b391fc69f61b7fe2ebe11564218a632c5"} Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.871506 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerStarted","Data":"03f33caeacbbaca1b7a95e1fadee0d53533f89baea8a15ca3df499d50ba94747"} Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.871526 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.871538 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerStarted","Data":"b67762593bfa38ab20dbba98a205fb02a1af2a52c922d04397f46e53277ce4fa"} Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.871549 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerStarted","Data":"31738ac3e00a8360d37f5ea2d06de7aa12a322e2f0ab4ed1260b366b4f1823df"} Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.871558 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerStarted","Data":"8fe1370375fde5ab82cb849e306c5a55005ecf678b7f98871206d083686119c5"} Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.871570 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerStarted","Data":"7d9c9792cd5dfdb3e0bfb306d232e27602d9208a0b1d4fbf215965dec13f1bf2"} Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.896232 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-4jkrm" podStartSLOduration=4.771272472 podStartE2EDuration="11.89621418s" podCreationTimestamp="2026-02-18 19:30:30 +0000 UTC" firstStartedPulling="2026-02-18 19:30:31.20308783 +0000 UTC m=+790.908020495" lastFinishedPulling="2026-02-18 19:30:38.328029528 +0000 UTC m=+798.032962203" observedRunningTime="2026-02-18 19:30:41.892672902 +0000 UTC m=+801.597605567" watchObservedRunningTime="2026-02-18 19:30:41.89621418 +0000 UTC m=+801.601146845" Feb 18 19:30:42 crc kubenswrapper[4942]: I0218 19:30:42.600938 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-pm8vg" Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.446748 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9fdg7"] Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.453573 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9fdg7" Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.455931 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-nqjmc" Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.456484 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.456540 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.456617 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9fdg7"] Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.644146 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mtww\" (UniqueName: 
\"kubernetes.io/projected/a991d775-aaf3-4672-a039-e0e212c0be47-kube-api-access-6mtww\") pod \"openstack-operator-index-9fdg7\" (UID: \"a991d775-aaf3-4672-a039-e0e212c0be47\") " pod="openstack-operators/openstack-operator-index-9fdg7" Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.745424 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mtww\" (UniqueName: \"kubernetes.io/projected/a991d775-aaf3-4672-a039-e0e212c0be47-kube-api-access-6mtww\") pod \"openstack-operator-index-9fdg7\" (UID: \"a991d775-aaf3-4672-a039-e0e212c0be47\") " pod="openstack-operators/openstack-operator-index-9fdg7" Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.780524 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mtww\" (UniqueName: \"kubernetes.io/projected/a991d775-aaf3-4672-a039-e0e212c0be47-kube-api-access-6mtww\") pod \"openstack-operator-index-9fdg7\" (UID: \"a991d775-aaf3-4672-a039-e0e212c0be47\") " pod="openstack-operators/openstack-operator-index-9fdg7" Feb 18 19:30:46 crc kubenswrapper[4942]: I0218 19:30:46.010952 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:46 crc kubenswrapper[4942]: I0218 19:30:46.069591 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9fdg7" Feb 18 19:30:46 crc kubenswrapper[4942]: I0218 19:30:46.076873 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:46 crc kubenswrapper[4942]: I0218 19:30:46.552154 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9fdg7"] Feb 18 19:30:46 crc kubenswrapper[4942]: I0218 19:30:46.909947 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9fdg7" event={"ID":"a991d775-aaf3-4672-a039-e0e212c0be47","Type":"ContainerStarted","Data":"53714d49461e4e0da0b076abca969cde23b7aeaeda7b3afdc4dfa1f5170c63e5"} Feb 18 19:30:48 crc kubenswrapper[4942]: I0218 19:30:48.798494 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9fdg7"] Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.405530 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kjnfm"] Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.406549 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.429414 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kjnfm"] Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.596230 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95ql6\" (UniqueName: \"kubernetes.io/projected/c2a7b573-c260-4ebc-8a90-c935ce2e9b05-kube-api-access-95ql6\") pod \"openstack-operator-index-kjnfm\" (UID: \"c2a7b573-c260-4ebc-8a90-c935ce2e9b05\") " pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.697618 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95ql6\" (UniqueName: \"kubernetes.io/projected/c2a7b573-c260-4ebc-8a90-c935ce2e9b05-kube-api-access-95ql6\") pod \"openstack-operator-index-kjnfm\" (UID: \"c2a7b573-c260-4ebc-8a90-c935ce2e9b05\") " pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.730324 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95ql6\" (UniqueName: \"kubernetes.io/projected/c2a7b573-c260-4ebc-8a90-c935ce2e9b05-kube-api-access-95ql6\") pod \"openstack-operator-index-kjnfm\" (UID: \"c2a7b573-c260-4ebc-8a90-c935ce2e9b05\") " pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.740849 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.932108 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9fdg7" event={"ID":"a991d775-aaf3-4672-a039-e0e212c0be47","Type":"ContainerStarted","Data":"c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a"} Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.932457 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-9fdg7" podUID="a991d775-aaf3-4672-a039-e0e212c0be47" containerName="registry-server" containerID="cri-o://c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a" gracePeriod=2 Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.948642 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9fdg7" podStartSLOduration=2.569181888 podStartE2EDuration="4.948625534s" podCreationTimestamp="2026-02-18 19:30:45 +0000 UTC" firstStartedPulling="2026-02-18 19:30:46.56158293 +0000 UTC m=+806.266515635" lastFinishedPulling="2026-02-18 19:30:48.941026616 +0000 UTC m=+808.645959281" observedRunningTime="2026-02-18 19:30:49.944738257 +0000 UTC m=+809.649670922" watchObservedRunningTime="2026-02-18 19:30:49.948625534 +0000 UTC m=+809.653558199" Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.170213 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kjnfm"] Feb 18 19:30:50 crc kubenswrapper[4942]: W0218 19:30:50.183650 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2a7b573_c260_4ebc_8a90_c935ce2e9b05.slice/crio-7753afbf3c777de2999a208e57e6fc75d0e6c30dd7288ba2239c2938ff4af054 WatchSource:0}: Error finding container 
7753afbf3c777de2999a208e57e6fc75d0e6c30dd7288ba2239c2938ff4af054: Status 404 returned error can't find the container with id 7753afbf3c777de2999a208e57e6fc75d0e6c30dd7288ba2239c2938ff4af054 Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.291276 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9fdg7" Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.313149 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mtww\" (UniqueName: \"kubernetes.io/projected/a991d775-aaf3-4672-a039-e0e212c0be47-kube-api-access-6mtww\") pod \"a991d775-aaf3-4672-a039-e0e212c0be47\" (UID: \"a991d775-aaf3-4672-a039-e0e212c0be47\") " Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.322654 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a991d775-aaf3-4672-a039-e0e212c0be47-kube-api-access-6mtww" (OuterVolumeSpecName: "kube-api-access-6mtww") pod "a991d775-aaf3-4672-a039-e0e212c0be47" (UID: "a991d775-aaf3-4672-a039-e0e212c0be47"). InnerVolumeSpecName "kube-api-access-6mtww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.416825 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mtww\" (UniqueName: \"kubernetes.io/projected/a991d775-aaf3-4672-a039-e0e212c0be47-kube-api-access-6mtww\") on node \"crc\" DevicePath \"\"" Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.943701 4942 generic.go:334] "Generic (PLEG): container finished" podID="a991d775-aaf3-4672-a039-e0e212c0be47" containerID="c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a" exitCode=0 Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.943785 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9fdg7" Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.943812 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9fdg7" event={"ID":"a991d775-aaf3-4672-a039-e0e212c0be47","Type":"ContainerDied","Data":"c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a"} Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.944428 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9fdg7" event={"ID":"a991d775-aaf3-4672-a039-e0e212c0be47","Type":"ContainerDied","Data":"53714d49461e4e0da0b076abca969cde23b7aeaeda7b3afdc4dfa1f5170c63e5"} Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.944449 4942 scope.go:117] "RemoveContainer" containerID="c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a" Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.948854 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kjnfm" event={"ID":"c2a7b573-c260-4ebc-8a90-c935ce2e9b05","Type":"ContainerStarted","Data":"f325496d6ebbd65e31717afe4b8565caca6e20cc557398f1b43cb36e8ca14c55"} Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.948920 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kjnfm" event={"ID":"c2a7b573-c260-4ebc-8a90-c935ce2e9b05","Type":"ContainerStarted","Data":"7753afbf3c777de2999a208e57e6fc75d0e6c30dd7288ba2239c2938ff4af054"} Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.973789 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kjnfm" podStartSLOduration=1.923243228 podStartE2EDuration="1.973718239s" podCreationTimestamp="2026-02-18 19:30:49 +0000 UTC" firstStartedPulling="2026-02-18 19:30:50.188381715 +0000 UTC m=+809.893314380" lastFinishedPulling="2026-02-18 
19:30:50.238856726 +0000 UTC m=+809.943789391" observedRunningTime="2026-02-18 19:30:50.972870987 +0000 UTC m=+810.677803692" watchObservedRunningTime="2026-02-18 19:30:50.973718239 +0000 UTC m=+810.678650954" Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.981633 4942 scope.go:117] "RemoveContainer" containerID="c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a" Feb 18 19:30:50 crc kubenswrapper[4942]: E0218 19:30:50.982197 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a\": container with ID starting with c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a not found: ID does not exist" containerID="c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a" Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.982257 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a"} err="failed to get container status \"c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a\": rpc error: code = NotFound desc = could not find container \"c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a\": container with ID starting with c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a not found: ID does not exist" Feb 18 19:30:51 crc kubenswrapper[4942]: I0218 19:30:51.000521 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9fdg7"] Feb 18 19:30:51 crc kubenswrapper[4942]: I0218 19:30:51.006747 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-9fdg7"] Feb 18 19:30:51 crc kubenswrapper[4942]: I0218 19:30:51.010011 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" 
Feb 18 19:30:51 crc kubenswrapper[4942]: I0218 19:30:51.015511 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:51 crc kubenswrapper[4942]: I0218 19:30:51.053821 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a991d775-aaf3-4672-a039-e0e212c0be47" path="/var/lib/kubelet/pods/a991d775-aaf3-4672-a039-e0e212c0be47/volumes" Feb 18 19:30:53 crc kubenswrapper[4942]: I0218 19:30:53.741397 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:30:53 crc kubenswrapper[4942]: I0218 19:30:53.741836 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:30:59 crc kubenswrapper[4942]: I0218 19:30:59.742015 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:30:59 crc kubenswrapper[4942]: I0218 19:30:59.742343 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:30:59 crc kubenswrapper[4942]: I0218 19:30:59.782145 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:31:00 crc kubenswrapper[4942]: I0218 19:31:00.060619 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 
19:31:01.066162 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln"] Feb 18 19:31:01 crc kubenswrapper[4942]: E0218 19:31:01.066456 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a991d775-aaf3-4672-a039-e0e212c0be47" containerName="registry-server" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.066470 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a991d775-aaf3-4672-a039-e0e212c0be47" containerName="registry-server" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.066593 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="a991d775-aaf3-4672-a039-e0e212c0be47" containerName="registry-server" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.067446 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.076250 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln"] Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.077754 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wvrwg" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.185291 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-util\") pod \"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.185720 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-pc5ph\" (UniqueName: \"kubernetes.io/projected/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-kube-api-access-pc5ph\") pod \"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.185948 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-bundle\") pod \"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.286991 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-util\") pod \"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.287099 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc5ph\" (UniqueName: \"kubernetes.io/projected/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-kube-api-access-pc5ph\") pod \"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.287189 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-bundle\") pod 
\"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.287793 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-bundle\") pod \"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.288018 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-util\") pod \"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.317101 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc5ph\" (UniqueName: \"kubernetes.io/projected/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-kube-api-access-pc5ph\") pod \"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.394281 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.828386 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln"] Feb 18 19:31:01 crc kubenswrapper[4942]: W0218 19:31:01.841861 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d1e1c52_dc07_468c_ad10_e1c39be1a5b5.slice/crio-c6e58174a012088918273a5d29807ed1147ce5b60b5df40edd1dd67a55f99d15 WatchSource:0}: Error finding container c6e58174a012088918273a5d29807ed1147ce5b60b5df40edd1dd67a55f99d15: Status 404 returned error can't find the container with id c6e58174a012088918273a5d29807ed1147ce5b60b5df40edd1dd67a55f99d15 Feb 18 19:31:02 crc kubenswrapper[4942]: I0218 19:31:02.054057 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" event={"ID":"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5","Type":"ContainerStarted","Data":"bc54afe9a8d3e8c30ea0ab0fda8b393f560170d2dddddbfc8d3f765fff73c7af"} Feb 18 19:31:02 crc kubenswrapper[4942]: I0218 19:31:02.054422 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" event={"ID":"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5","Type":"ContainerStarted","Data":"c6e58174a012088918273a5d29807ed1147ce5b60b5df40edd1dd67a55f99d15"} Feb 18 19:31:02 crc kubenswrapper[4942]: E0218 19:31:02.215925 4942 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d1e1c52_dc07_468c_ad10_e1c39be1a5b5.slice/crio-bc54afe9a8d3e8c30ea0ab0fda8b393f560170d2dddddbfc8d3f765fff73c7af.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d1e1c52_dc07_468c_ad10_e1c39be1a5b5.slice/crio-conmon-bc54afe9a8d3e8c30ea0ab0fda8b393f560170d2dddddbfc8d3f765fff73c7af.scope\": RecentStats: unable to find data in memory cache]" Feb 18 19:31:03 crc kubenswrapper[4942]: I0218 19:31:03.062976 4942 generic.go:334] "Generic (PLEG): container finished" podID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerID="bc54afe9a8d3e8c30ea0ab0fda8b393f560170d2dddddbfc8d3f765fff73c7af" exitCode=0 Feb 18 19:31:03 crc kubenswrapper[4942]: I0218 19:31:03.063034 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" event={"ID":"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5","Type":"ContainerDied","Data":"bc54afe9a8d3e8c30ea0ab0fda8b393f560170d2dddddbfc8d3f765fff73c7af"} Feb 18 19:31:04 crc kubenswrapper[4942]: I0218 19:31:04.075990 4942 generic.go:334] "Generic (PLEG): container finished" podID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerID="8a5286f376506706706bb1b0a1894c49cffb2a9d67abd03ae06a8cd1ec83057e" exitCode=0 Feb 18 19:31:04 crc kubenswrapper[4942]: I0218 19:31:04.076068 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" event={"ID":"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5","Type":"ContainerDied","Data":"8a5286f376506706706bb1b0a1894c49cffb2a9d67abd03ae06a8cd1ec83057e"} Feb 18 19:31:05 crc kubenswrapper[4942]: I0218 19:31:05.091151 4942 generic.go:334] "Generic (PLEG): container finished" podID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerID="7d2535a756fd4b341284d12e18b8c048d1ad768b210a82136b38af44b4e60253" exitCode=0 Feb 18 19:31:05 crc kubenswrapper[4942]: I0218 19:31:05.091190 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" 
event={"ID":"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5","Type":"ContainerDied","Data":"7d2535a756fd4b341284d12e18b8c048d1ad768b210a82136b38af44b4e60253"} Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.383239 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.460498 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-util\") pod \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.460614 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc5ph\" (UniqueName: \"kubernetes.io/projected/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-kube-api-access-pc5ph\") pod \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.460663 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-bundle\") pod \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.461456 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-bundle" (OuterVolumeSpecName: "bundle") pod "9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" (UID: "9d1e1c52-dc07-468c-ad10-e1c39be1a5b5"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.465230 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-kube-api-access-pc5ph" (OuterVolumeSpecName: "kube-api-access-pc5ph") pod "9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" (UID: "9d1e1c52-dc07-468c-ad10-e1c39be1a5b5"). InnerVolumeSpecName "kube-api-access-pc5ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.482438 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-util" (OuterVolumeSpecName: "util") pod "9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" (UID: "9d1e1c52-dc07-468c-ad10-e1c39be1a5b5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.562294 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc5ph\" (UniqueName: \"kubernetes.io/projected/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-kube-api-access-pc5ph\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.562404 4942 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.562427 4942 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:07 crc kubenswrapper[4942]: I0218 19:31:07.109667 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" 
event={"ID":"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5","Type":"ContainerDied","Data":"c6e58174a012088918273a5d29807ed1147ce5b60b5df40edd1dd67a55f99d15"} Feb 18 19:31:07 crc kubenswrapper[4942]: I0218 19:31:07.109740 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6e58174a012088918273a5d29807ed1147ce5b60b5df40edd1dd67a55f99d15" Feb 18 19:31:07 crc kubenswrapper[4942]: I0218 19:31:07.109863 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.210490 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v"] Feb 18 19:31:13 crc kubenswrapper[4942]: E0218 19:31:13.211319 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerName="util" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.211336 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerName="util" Feb 18 19:31:13 crc kubenswrapper[4942]: E0218 19:31:13.211346 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerName="extract" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.211353 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerName="extract" Feb 18 19:31:13 crc kubenswrapper[4942]: E0218 19:31:13.211367 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerName="pull" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.211374 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerName="pull" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.211515 4942 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerName="extract" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.212039 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.215089 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-z8plb" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.236818 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v"] Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.255364 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpjdz\" (UniqueName: \"kubernetes.io/projected/268dc206-1be7-4a8a-8cd7-45b3c667b3bd-kube-api-access-lpjdz\") pod \"openstack-operator-controller-init-fc58468f4-xvr6v\" (UID: \"268dc206-1be7-4a8a-8cd7-45b3c667b3bd\") " pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.356317 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpjdz\" (UniqueName: \"kubernetes.io/projected/268dc206-1be7-4a8a-8cd7-45b3c667b3bd-kube-api-access-lpjdz\") pod \"openstack-operator-controller-init-fc58468f4-xvr6v\" (UID: \"268dc206-1be7-4a8a-8cd7-45b3c667b3bd\") " pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.373685 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpjdz\" (UniqueName: \"kubernetes.io/projected/268dc206-1be7-4a8a-8cd7-45b3c667b3bd-kube-api-access-lpjdz\") pod \"openstack-operator-controller-init-fc58468f4-xvr6v\" 
(UID: \"268dc206-1be7-4a8a-8cd7-45b3c667b3bd\") " pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.527989 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.756219 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v"] Feb 18 19:31:14 crc kubenswrapper[4942]: I0218 19:31:14.166754 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" event={"ID":"268dc206-1be7-4a8a-8cd7-45b3c667b3bd","Type":"ContainerStarted","Data":"1d462ee4383d0179f60289a8d57fa47871dad9b6029ef1927d016cbc87a139ae"} Feb 18 19:31:18 crc kubenswrapper[4942]: I0218 19:31:18.198742 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" event={"ID":"268dc206-1be7-4a8a-8cd7-45b3c667b3bd","Type":"ContainerStarted","Data":"774686fb33e15d8649344f37dc7798663513e5d79967448c8dbc91c16bca7f32"} Feb 18 19:31:18 crc kubenswrapper[4942]: I0218 19:31:18.199556 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" Feb 18 19:31:18 crc kubenswrapper[4942]: I0218 19:31:18.235642 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" podStartSLOduration=1.469611162 podStartE2EDuration="5.235625427s" podCreationTimestamp="2026-02-18 19:31:13 +0000 UTC" firstStartedPulling="2026-02-18 19:31:13.765475586 +0000 UTC m=+833.470408261" lastFinishedPulling="2026-02-18 19:31:17.531489861 +0000 UTC m=+837.236422526" observedRunningTime="2026-02-18 19:31:18.23017996 +0000 UTC 
m=+837.935112625" watchObservedRunningTime="2026-02-18 19:31:18.235625427 +0000 UTC m=+837.940558092" Feb 18 19:31:23 crc kubenswrapper[4942]: I0218 19:31:23.532640 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" Feb 18 19:31:23 crc kubenswrapper[4942]: I0218 19:31:23.741202 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:31:23 crc kubenswrapper[4942]: I0218 19:31:23.741596 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:31:23 crc kubenswrapper[4942]: I0218 19:31:23.741655 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:31:23 crc kubenswrapper[4942]: I0218 19:31:23.742450 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"573640abad6b15c1dd30fd80a1b600755a1efda149dab25e49e3a1173acf646a"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:31:23 crc kubenswrapper[4942]: I0218 19:31:23.742524 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" 
containerID="cri-o://573640abad6b15c1dd30fd80a1b600755a1efda149dab25e49e3a1173acf646a" gracePeriod=600 Feb 18 19:31:24 crc kubenswrapper[4942]: I0218 19:31:24.247697 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="573640abad6b15c1dd30fd80a1b600755a1efda149dab25e49e3a1173acf646a" exitCode=0 Feb 18 19:31:24 crc kubenswrapper[4942]: I0218 19:31:24.247889 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"573640abad6b15c1dd30fd80a1b600755a1efda149dab25e49e3a1173acf646a"} Feb 18 19:31:24 crc kubenswrapper[4942]: I0218 19:31:24.248082 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"4ad75b87330a71997979db298f42e179882b61890e654d3a0c077cf25d5cb90b"} Feb 18 19:31:24 crc kubenswrapper[4942]: I0218 19:31:24.248115 4942 scope.go:117] "RemoveContainer" containerID="69563ccc2ca715071d77cf8ee678820b7e15eada4a6e511a3ef021c2758d0101" Feb 18 19:31:35 crc kubenswrapper[4942]: I0218 19:31:35.856347 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-928mg"] Feb 18 19:31:35 crc kubenswrapper[4942]: I0218 19:31:35.859730 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:35 crc kubenswrapper[4942]: I0218 19:31:35.873612 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-928mg"] Feb 18 19:31:35 crc kubenswrapper[4942]: I0218 19:31:35.961405 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-catalog-content\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:35 crc kubenswrapper[4942]: I0218 19:31:35.961464 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kt9m\" (UniqueName: \"kubernetes.io/projected/83b97eec-f1b8-4205-933f-205e30caeec2-kube-api-access-6kt9m\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:35 crc kubenswrapper[4942]: I0218 19:31:35.961496 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-utilities\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:36 crc kubenswrapper[4942]: I0218 19:31:36.062474 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-catalog-content\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:36 crc kubenswrapper[4942]: I0218 19:31:36.062544 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6kt9m\" (UniqueName: \"kubernetes.io/projected/83b97eec-f1b8-4205-933f-205e30caeec2-kube-api-access-6kt9m\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:36 crc kubenswrapper[4942]: I0218 19:31:36.062581 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-utilities\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:36 crc kubenswrapper[4942]: I0218 19:31:36.063122 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-utilities\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:36 crc kubenswrapper[4942]: I0218 19:31:36.063335 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-catalog-content\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:36 crc kubenswrapper[4942]: I0218 19:31:36.085549 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kt9m\" (UniqueName: \"kubernetes.io/projected/83b97eec-f1b8-4205-933f-205e30caeec2-kube-api-access-6kt9m\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:36 crc kubenswrapper[4942]: I0218 19:31:36.185115 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:36 crc kubenswrapper[4942]: I0218 19:31:36.709477 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-928mg"] Feb 18 19:31:37 crc kubenswrapper[4942]: I0218 19:31:37.353561 4942 generic.go:334] "Generic (PLEG): container finished" podID="83b97eec-f1b8-4205-933f-205e30caeec2" containerID="d49471940515dac44ca7b4deb7b69786b17d58c82210cbd128da7a4353fdc212" exitCode=0 Feb 18 19:31:37 crc kubenswrapper[4942]: I0218 19:31:37.353808 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-928mg" event={"ID":"83b97eec-f1b8-4205-933f-205e30caeec2","Type":"ContainerDied","Data":"d49471940515dac44ca7b4deb7b69786b17d58c82210cbd128da7a4353fdc212"} Feb 18 19:31:37 crc kubenswrapper[4942]: I0218 19:31:37.353831 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-928mg" event={"ID":"83b97eec-f1b8-4205-933f-205e30caeec2","Type":"ContainerStarted","Data":"e90c667153875bf407511ed88e15dc632e46a63fd6b238de865623e6e16e6e1a"} Feb 18 19:31:38 crc kubenswrapper[4942]: I0218 19:31:38.362256 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-928mg" event={"ID":"83b97eec-f1b8-4205-933f-205e30caeec2","Type":"ContainerStarted","Data":"63a88ca6ca33dff5e2ab3bac904d6ef958cd2ba371f5bb8c57b0d9a89c8d0c4e"} Feb 18 19:31:39 crc kubenswrapper[4942]: I0218 19:31:39.369780 4942 generic.go:334] "Generic (PLEG): container finished" podID="83b97eec-f1b8-4205-933f-205e30caeec2" containerID="63a88ca6ca33dff5e2ab3bac904d6ef958cd2ba371f5bb8c57b0d9a89c8d0c4e" exitCode=0 Feb 18 19:31:39 crc kubenswrapper[4942]: I0218 19:31:39.370028 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-928mg" 
event={"ID":"83b97eec-f1b8-4205-933f-205e30caeec2","Type":"ContainerDied","Data":"63a88ca6ca33dff5e2ab3bac904d6ef958cd2ba371f5bb8c57b0d9a89c8d0c4e"} Feb 18 19:31:40 crc kubenswrapper[4942]: I0218 19:31:40.379223 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-928mg" event={"ID":"83b97eec-f1b8-4205-933f-205e30caeec2","Type":"ContainerStarted","Data":"801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de"} Feb 18 19:31:40 crc kubenswrapper[4942]: I0218 19:31:40.402238 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-928mg" podStartSLOduration=2.8837019699999997 podStartE2EDuration="5.402219007s" podCreationTimestamp="2026-02-18 19:31:35 +0000 UTC" firstStartedPulling="2026-02-18 19:31:37.355387757 +0000 UTC m=+857.060320422" lastFinishedPulling="2026-02-18 19:31:39.873904794 +0000 UTC m=+859.578837459" observedRunningTime="2026-02-18 19:31:40.39673684 +0000 UTC m=+860.101669505" watchObservedRunningTime="2026-02-18 19:31:40.402219007 +0000 UTC m=+860.107151682" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.040268 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.041799 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.043790 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-lp758" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.045882 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.050389 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.051720 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-rrtjz" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.071551 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.077714 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.078690 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.082030 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-zgrcf" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.097403 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.113721 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.114708 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.131967 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-5zz9p" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.172392 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.174186 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g2gg\" (UniqueName: \"kubernetes.io/projected/51f45ea1-2b95-4553-9e3d-5e6bb4c8b862-kube-api-access-4g2gg\") pod \"cinder-operator-controller-manager-57746b5ff9-56k6g\" (UID: \"51f45ea1-2b95-4553-9e3d-5e6bb4c8b862\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.174303 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgbdq\" 
(UniqueName: \"kubernetes.io/projected/829c57a8-54c3-43c5-8bea-2ceeeafeb143-kube-api-access-zgbdq\") pod \"barbican-operator-controller-manager-c4b7d6946-rvgp6\" (UID: \"829c57a8-54c3-43c5-8bea-2ceeeafeb143\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.174376 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4hbp\" (UniqueName: \"kubernetes.io/projected/844a0cad-5a6a-4ab4-8e32-388835eb9f4a-kube-api-access-n4hbp\") pod \"designate-operator-controller-manager-55cc45767f-26x4h\" (UID: \"844a0cad-5a6a-4ab4-8e32-388835eb9f4a\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.182834 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.183676 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.185068 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.195390 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mgvvk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.196546 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.203456 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.204275 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.216071 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ns786" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.228813 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.236404 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.237269 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.242739 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.243046 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ncxkw" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.246162 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.263778 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.264827 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.268296 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.269242 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.276207 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.276693 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g2gg\" (UniqueName: \"kubernetes.io/projected/51f45ea1-2b95-4553-9e3d-5e6bb4c8b862-kube-api-access-4g2gg\") pod \"cinder-operator-controller-manager-57746b5ff9-56k6g\" (UID: \"51f45ea1-2b95-4553-9e3d-5e6bb4c8b862\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.276777 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsb7d\" (UniqueName: \"kubernetes.io/projected/8d849c9e-0da1-4910-9922-5ea2dd2728a2-kube-api-access-rsb7d\") pod \"glance-operator-controller-manager-68c6d499cb-g7kpv\" (UID: \"8d849c9e-0da1-4910-9922-5ea2dd2728a2\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.276801 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7f4p\" (UniqueName: \"kubernetes.io/projected/4cbefad2-6c6d-4b7b-bba9-acf857a54a4b-kube-api-access-c7f4p\") pod \"heat-operator-controller-manager-9595d6797-xrzwv\" (UID: \"4cbefad2-6c6d-4b7b-bba9-acf857a54a4b\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.276819 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgbdq\" (UniqueName: \"kubernetes.io/projected/829c57a8-54c3-43c5-8bea-2ceeeafeb143-kube-api-access-zgbdq\") pod 
\"barbican-operator-controller-manager-c4b7d6946-rvgp6\" (UID: \"829c57a8-54c3-43c5-8bea-2ceeeafeb143\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.276851 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4hbp\" (UniqueName: \"kubernetes.io/projected/844a0cad-5a6a-4ab4-8e32-388835eb9f4a-kube-api-access-n4hbp\") pod \"designate-operator-controller-manager-55cc45767f-26x4h\" (UID: \"844a0cad-5a6a-4ab4-8e32-388835eb9f4a\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.278755 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-q9kfx" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.278931 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-9v4zz" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.312812 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.313587 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.333645 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g2gg\" (UniqueName: \"kubernetes.io/projected/51f45ea1-2b95-4553-9e3d-5e6bb4c8b862-kube-api-access-4g2gg\") pod \"cinder-operator-controller-manager-57746b5ff9-56k6g\" (UID: \"51f45ea1-2b95-4553-9e3d-5e6bb4c8b862\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.333662 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgbdq\" (UniqueName: \"kubernetes.io/projected/829c57a8-54c3-43c5-8bea-2ceeeafeb143-kube-api-access-zgbdq\") pod \"barbican-operator-controller-manager-c4b7d6946-rvgp6\" (UID: \"829c57a8-54c3-43c5-8bea-2ceeeafeb143\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.334421 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-cxw8x" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.339817 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.346401 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4hbp\" (UniqueName: \"kubernetes.io/projected/844a0cad-5a6a-4ab4-8e32-388835eb9f4a-kube-api-access-n4hbp\") pod \"designate-operator-controller-manager-55cc45767f-26x4h\" (UID: \"844a0cad-5a6a-4ab4-8e32-388835eb9f4a\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.353832 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.370941 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.379812 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.379858 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w4v6\" (UniqueName: \"kubernetes.io/projected/80bc5b9b-00c2-4003-8279-1dbc3ff3aa05-kube-api-access-8w4v6\") pod \"horizon-operator-controller-manager-54fb488b88-9gjbj\" (UID: \"80bc5b9b-00c2-4003-8279-1dbc3ff3aa05\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.379925 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsb7d\" (UniqueName: \"kubernetes.io/projected/8d849c9e-0da1-4910-9922-5ea2dd2728a2-kube-api-access-rsb7d\") pod \"glance-operator-controller-manager-68c6d499cb-g7kpv\" (UID: \"8d849c9e-0da1-4910-9922-5ea2dd2728a2\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.379946 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7f4p\" (UniqueName: \"kubernetes.io/projected/4cbefad2-6c6d-4b7b-bba9-acf857a54a4b-kube-api-access-c7f4p\") pod 
\"heat-operator-controller-manager-9595d6797-xrzwv\" (UID: \"4cbefad2-6c6d-4b7b-bba9-acf857a54a4b\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.380006 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz6ht\" (UniqueName: \"kubernetes.io/projected/230a2167-e078-48a6-93ce-84a37ff4ac02-kube-api-access-zz6ht\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.380030 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n49wh\" (UniqueName: \"kubernetes.io/projected/11715b33-f996-46bf-81db-0557e84e7fea-kube-api-access-n49wh\") pod \"keystone-operator-controller-manager-6c78d668d5-t9dzq\" (UID: \"11715b33-f996-46bf-81db-0557e84e7fea\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.380046 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2bnw\" (UniqueName: \"kubernetes.io/projected/1e73a8a0-3246-4a08-b4be-d587d82742a4-kube-api-access-s2bnw\") pod \"ironic-operator-controller-manager-6494cdbf8f-9qvzl\" (UID: \"1e73a8a0-3246-4a08-b4be-d587d82742a4\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.386414 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.387631 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.388813 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.391691 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-gq7ft" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.399383 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.400526 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.410638 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7f4p\" (UniqueName: \"kubernetes.io/projected/4cbefad2-6c6d-4b7b-bba9-acf857a54a4b-kube-api-access-c7f4p\") pod \"heat-operator-controller-manager-9595d6797-xrzwv\" (UID: \"4cbefad2-6c6d-4b7b-bba9-acf857a54a4b\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.411122 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-pvgms" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.416205 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.435599 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsb7d\" (UniqueName: \"kubernetes.io/projected/8d849c9e-0da1-4910-9922-5ea2dd2728a2-kube-api-access-rsb7d\") pod \"glance-operator-controller-manager-68c6d499cb-g7kpv\" (UID: \"8d849c9e-0da1-4910-9922-5ea2dd2728a2\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.458825 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.479205 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.481668 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.482956 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz6ht\" (UniqueName: \"kubernetes.io/projected/230a2167-e078-48a6-93ce-84a37ff4ac02-kube-api-access-zz6ht\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.483007 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n49wh\" (UniqueName: \"kubernetes.io/projected/11715b33-f996-46bf-81db-0557e84e7fea-kube-api-access-n49wh\") pod \"keystone-operator-controller-manager-6c78d668d5-t9dzq\" (UID: \"11715b33-f996-46bf-81db-0557e84e7fea\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.483027 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2bnw\" (UniqueName: \"kubernetes.io/projected/1e73a8a0-3246-4a08-b4be-d587d82742a4-kube-api-access-s2bnw\") pod \"ironic-operator-controller-manager-6494cdbf8f-9qvzl\" (UID: \"1e73a8a0-3246-4a08-b4be-d587d82742a4\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.483047 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.483069 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8w4v6\" (UniqueName: \"kubernetes.io/projected/80bc5b9b-00c2-4003-8279-1dbc3ff3aa05-kube-api-access-8w4v6\") pod \"horizon-operator-controller-manager-54fb488b88-9gjbj\" (UID: \"80bc5b9b-00c2-4003-8279-1dbc3ff3aa05\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.483112 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89l7l\" (UniqueName: \"kubernetes.io/projected/9d43a851-2d6c-4fe9-86e1-04c7d382b257-kube-api-access-89l7l\") pod \"neutron-operator-controller-manager-54967dbbdf-tzn65\" (UID: \"9d43a851-2d6c-4fe9-86e1-04c7d382b257\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.483145 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgjwf\" (UniqueName: \"kubernetes.io/projected/a15b8ac2-0742-4fd7-9a14-005620c93a3d-kube-api-access-mgjwf\") pod \"manila-operator-controller-manager-96fff9cb8-qs9mb\" (UID: \"a15b8ac2-0742-4fd7-9a14-005620c93a3d\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" Feb 18 19:31:44 crc kubenswrapper[4942]: E0218 19:31:44.483287 4942 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:44 crc kubenswrapper[4942]: E0218 19:31:44.483331 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert podName:230a2167-e078-48a6-93ce-84a37ff4ac02 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:44.983313132 +0000 UTC m=+864.688245797 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert") pod "infra-operator-controller-manager-66d6b5f488-5vptt" (UID: "230a2167-e078-48a6-93ce-84a37ff4ac02") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.489477 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.490279 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.501383 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cmcch" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.513197 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.522917 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.525890 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.524397 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2bnw\" (UniqueName: \"kubernetes.io/projected/1e73a8a0-3246-4a08-b4be-d587d82742a4-kube-api-access-s2bnw\") pod \"ironic-operator-controller-manager-6494cdbf8f-9qvzl\" (UID: \"1e73a8a0-3246-4a08-b4be-d587d82742a4\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.527790 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz6ht\" (UniqueName: \"kubernetes.io/projected/230a2167-e078-48a6-93ce-84a37ff4ac02-kube-api-access-zz6ht\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.530016 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-t6cn7" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.535047 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n49wh\" (UniqueName: \"kubernetes.io/projected/11715b33-f996-46bf-81db-0557e84e7fea-kube-api-access-n49wh\") pod \"keystone-operator-controller-manager-6c78d668d5-t9dzq\" (UID: \"11715b33-f996-46bf-81db-0557e84e7fea\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.542382 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.548656 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8w4v6\" (UniqueName: \"kubernetes.io/projected/80bc5b9b-00c2-4003-8279-1dbc3ff3aa05-kube-api-access-8w4v6\") pod \"horizon-operator-controller-manager-54fb488b88-9gjbj\" (UID: \"80bc5b9b-00c2-4003-8279-1dbc3ff3aa05\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.584609 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hktdt\" (UniqueName: \"kubernetes.io/projected/cde9a09e-2dfe-410e-95ad-8f297b517ef4-kube-api-access-hktdt\") pod \"nova-operator-controller-manager-5ddd85db87-5jzdp\" (UID: \"cde9a09e-2dfe-410e-95ad-8f297b517ef4\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.584661 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89l7l\" (UniqueName: \"kubernetes.io/projected/9d43a851-2d6c-4fe9-86e1-04c7d382b257-kube-api-access-89l7l\") pod \"neutron-operator-controller-manager-54967dbbdf-tzn65\" (UID: \"9d43a851-2d6c-4fe9-86e1-04c7d382b257\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.584684 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgjwf\" (UniqueName: \"kubernetes.io/projected/a15b8ac2-0742-4fd7-9a14-005620c93a3d-kube-api-access-mgjwf\") pod \"manila-operator-controller-manager-96fff9cb8-qs9mb\" (UID: \"a15b8ac2-0742-4fd7-9a14-005620c93a3d\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.584724 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngp72\" (UniqueName: \"kubernetes.io/projected/c2cc0d22-92b6-4c67-9627-79abffb9917c-kube-api-access-ngp72\") 
pod \"mariadb-operator-controller-manager-66997756f6-f8nnp\" (UID: \"c2cc0d22-92b6-4c67-9627-79abffb9917c\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.603037 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.608524 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgjwf\" (UniqueName: \"kubernetes.io/projected/a15b8ac2-0742-4fd7-9a14-005620c93a3d-kube-api-access-mgjwf\") pod \"manila-operator-controller-manager-96fff9cb8-qs9mb\" (UID: \"a15b8ac2-0742-4fd7-9a14-005620c93a3d\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.618075 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89l7l\" (UniqueName: \"kubernetes.io/projected/9d43a851-2d6c-4fe9-86e1-04c7d382b257-kube-api-access-89l7l\") pod \"neutron-operator-controller-manager-54967dbbdf-tzn65\" (UID: \"9d43a851-2d6c-4fe9-86e1-04c7d382b257\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.618244 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.633230 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.634191 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.638847 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.638872 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tnb9n" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.642602 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.696881 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.701565 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktdt\" (UniqueName: \"kubernetes.io/projected/cde9a09e-2dfe-410e-95ad-8f297b517ef4-kube-api-access-hktdt\") pod \"nova-operator-controller-manager-5ddd85db87-5jzdp\" (UID: \"cde9a09e-2dfe-410e-95ad-8f297b517ef4\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.701665 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngp72\" (UniqueName: \"kubernetes.io/projected/c2cc0d22-92b6-4c67-9627-79abffb9917c-kube-api-access-ngp72\") pod \"mariadb-operator-controller-manager-66997756f6-f8nnp\" (UID: \"c2cc0d22-92b6-4c67-9627-79abffb9917c\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.701718 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-28xtk\" (UniqueName: \"kubernetes.io/projected/3b42f10c-a162-4d74-9eed-b6c3ef08cdb7-kube-api-access-28xtk\") pod \"octavia-operator-controller-manager-745bbbd77b-4xhmd\" (UID: \"3b42f10c-a162-4d74-9eed-b6c3ef08cdb7\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.704498 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.716581 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-6vthr" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.725116 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.738793 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngp72\" (UniqueName: \"kubernetes.io/projected/c2cc0d22-92b6-4c67-9627-79abffb9917c-kube-api-access-ngp72\") pod \"mariadb-operator-controller-manager-66997756f6-f8nnp\" (UID: \"c2cc0d22-92b6-4c67-9627-79abffb9917c\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.746182 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.746374 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hktdt\" (UniqueName: \"kubernetes.io/projected/cde9a09e-2dfe-410e-95ad-8f297b517ef4-kube-api-access-hktdt\") pod \"nova-operator-controller-manager-5ddd85db87-5jzdp\" (UID: \"cde9a09e-2dfe-410e-95ad-8f297b517ef4\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.749645 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.750585 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.754553 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.755418 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-h8dd5" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.762350 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.763427 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.774999 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.776180 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-f27rp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.789653 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.791691 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.797638 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.802667 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28xtk\" (UniqueName: \"kubernetes.io/projected/3b42f10c-a162-4d74-9eed-b6c3ef08cdb7-kube-api-access-28xtk\") pod \"octavia-operator-controller-manager-745bbbd77b-4xhmd\" (UID: \"3b42f10c-a162-4d74-9eed-b6c3ef08cdb7\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.802852 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmr8t\" (UniqueName: \"kubernetes.io/projected/716e0e70-0ef0-4843-9ad3-d84f47a3397f-kube-api-access-jmr8t\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: 
\"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.802904 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8w6c\" (UniqueName: \"kubernetes.io/projected/df8c140d-a735-4a14-8239-67f577546e01-kube-api-access-s8w6c\") pod \"ovn-operator-controller-manager-85c99d655-6kt98\" (UID: \"df8c140d-a735-4a14-8239-67f577546e01\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.802924 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.818443 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.821207 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.830513 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-df9tk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.838081 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.848706 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.849283 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.855585 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.856694 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.866497 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-v92qj" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.887234 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28xtk\" (UniqueName: \"kubernetes.io/projected/3b42f10c-a162-4d74-9eed-b6c3ef08cdb7-kube-api-access-28xtk\") pod \"octavia-operator-controller-manager-745bbbd77b-4xhmd\" (UID: \"3b42f10c-a162-4d74-9eed-b6c3ef08cdb7\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.889922 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.907515 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmr8t\" (UniqueName: \"kubernetes.io/projected/716e0e70-0ef0-4843-9ad3-d84f47a3397f-kube-api-access-jmr8t\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.907562 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8w6c\" (UniqueName: \"kubernetes.io/projected/df8c140d-a735-4a14-8239-67f577546e01-kube-api-access-s8w6c\") pod \"ovn-operator-controller-manager-85c99d655-6kt98\" (UID: \"df8c140d-a735-4a14-8239-67f577546e01\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.907586 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.907664 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kxjt\" (UniqueName: \"kubernetes.io/projected/8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377-kube-api-access-8kxjt\") pod \"placement-operator-controller-manager-57bd55f9b7-cg225\" (UID: \"8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" Feb 18 19:31:44 crc 
kubenswrapper[4942]: I0218 19:31:44.907687 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppsnj\" (UniqueName: \"kubernetes.io/projected/6618726f-c93c-4d05-b6d9-a08aca84801f-kube-api-access-ppsnj\") pod \"swift-operator-controller-manager-79558bbfbf-r8hvr\" (UID: \"6618726f-c93c-4d05-b6d9-a08aca84801f\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" Feb 18 19:31:44 crc kubenswrapper[4942]: E0218 19:31:44.908011 4942 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:44 crc kubenswrapper[4942]: E0218 19:31:44.908084 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert podName:716e0e70-0ef0-4843-9ad3-d84f47a3397f nodeName:}" failed. No retries permitted until 2026-02-18 19:31:45.408062218 +0000 UTC m=+865.112994983 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" (UID: "716e0e70-0ef0-4843-9ad3-d84f47a3397f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.912808 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.926687 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.927902 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.931193 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.932392 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-52wtk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.935990 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8w6c\" (UniqueName: \"kubernetes.io/projected/df8c140d-a735-4a14-8239-67f577546e01-kube-api-access-s8w6c\") pod \"ovn-operator-controller-manager-85c99d655-6kt98\" (UID: \"df8c140d-a735-4a14-8239-67f577546e01\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.943818 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.948830 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmr8t\" (UniqueName: \"kubernetes.io/projected/716e0e70-0ef0-4843-9ad3-d84f47a3397f-kube-api-access-jmr8t\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.973344 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.974273 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.976510 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.976523 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-fshwm" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.976336 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.985495 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:44.999616 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.000841 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.005497 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-c9tgl" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.011910 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.011956 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6wkc\" (UniqueName: \"kubernetes.io/projected/a65b16e4-f55f-427a-a629-2fbff014a7af-kube-api-access-m6wkc\") pod \"telemetry-operator-controller-manager-56dc67d744-hhjwz\" (UID: \"a65b16e4-f55f-427a-a629-2fbff014a7af\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.011978 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kxjt\" (UniqueName: \"kubernetes.io/projected/8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377-kube-api-access-8kxjt\") pod \"placement-operator-controller-manager-57bd55f9b7-cg225\" (UID: \"8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.011998 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppsnj\" (UniqueName: \"kubernetes.io/projected/6618726f-c93c-4d05-b6d9-a08aca84801f-kube-api-access-ppsnj\") pod 
\"swift-operator-controller-manager-79558bbfbf-r8hvr\" (UID: \"6618726f-c93c-4d05-b6d9-a08aca84801f\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.012025 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwh79\" (UniqueName: \"kubernetes.io/projected/2fda65c9-97fe-4689-bd35-7f7974841223-kube-api-access-lwh79\") pod \"test-operator-controller-manager-8467ccb4c8-shr4v\" (UID: \"2fda65c9-97fe-4689-bd35-7f7974841223\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.012150 4942 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.012186 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert podName:230a2167-e078-48a6-93ce-84a37ff4ac02 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:46.012173059 +0000 UTC m=+865.717105714 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert") pod "infra-operator-controller-manager-66d6b5f488-5vptt" (UID: "230a2167-e078-48a6-93ce-84a37ff4ac02") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.012286 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.029645 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kxjt\" (UniqueName: \"kubernetes.io/projected/8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377-kube-api-access-8kxjt\") pod \"placement-operator-controller-manager-57bd55f9b7-cg225\" (UID: \"8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.030109 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppsnj\" (UniqueName: \"kubernetes.io/projected/6618726f-c93c-4d05-b6d9-a08aca84801f-kube-api-access-ppsnj\") pod \"swift-operator-controller-manager-79558bbfbf-r8hvr\" (UID: \"6618726f-c93c-4d05-b6d9-a08aca84801f\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.033891 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.055005 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.079864 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.083828 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.112362 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.114750 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.114820 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkx4q\" (UniqueName: \"kubernetes.io/projected/250062ed-a35d-489a-a6b5-e6f96d1532d6-kube-api-access-xkx4q\") pod \"watcher-operator-controller-manager-c8b4db7df-h9q84\" (UID: \"250062ed-a35d-489a-a6b5-e6f96d1532d6\") " pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.114889 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfw9c\" (UniqueName: \"kubernetes.io/projected/0f7a5f35-f6e0-4f17-a380-13e8718ba658-kube-api-access-pfw9c\") pod 
\"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.114953 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v687s\" (UniqueName: \"kubernetes.io/projected/5fe849cd-ac9e-48bb-a7dd-f7f529a324e3-kube-api-access-v687s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wvj72\" (UID: \"5fe849cd-ac9e-48bb-a7dd-f7f529a324e3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.114983 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6wkc\" (UniqueName: \"kubernetes.io/projected/a65b16e4-f55f-427a-a629-2fbff014a7af-kube-api-access-m6wkc\") pod \"telemetry-operator-controller-manager-56dc67d744-hhjwz\" (UID: \"a65b16e4-f55f-427a-a629-2fbff014a7af\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.115039 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwh79\" (UniqueName: \"kubernetes.io/projected/2fda65c9-97fe-4689-bd35-7f7974841223-kube-api-access-lwh79\") pod \"test-operator-controller-manager-8467ccb4c8-shr4v\" (UID: \"2fda65c9-97fe-4689-bd35-7f7974841223\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.115061 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " 
pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.120079 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.135250 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6wkc\" (UniqueName: \"kubernetes.io/projected/a65b16e4-f55f-427a-a629-2fbff014a7af-kube-api-access-m6wkc\") pod \"telemetry-operator-controller-manager-56dc67d744-hhjwz\" (UID: \"a65b16e4-f55f-427a-a629-2fbff014a7af\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.140504 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwh79\" (UniqueName: \"kubernetes.io/projected/2fda65c9-97fe-4689-bd35-7f7974841223-kube-api-access-lwh79\") pod \"test-operator-controller-manager-8467ccb4c8-shr4v\" (UID: \"2fda65c9-97fe-4689-bd35-7f7974841223\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.143941 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.197474 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.203467 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod844a0cad_5a6a_4ab4_8e32_388835eb9f4a.slice/crio-491ea45a82cb77025866c40f2868170dcf34cae892f581557b64cce1088a204e WatchSource:0}: Error finding container 491ea45a82cb77025866c40f2868170dcf34cae892f581557b64cce1088a204e: Status 404 returned error can't find the container with id 491ea45a82cb77025866c40f2868170dcf34cae892f581557b64cce1088a204e Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.216225 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.216622 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v687s\" (UniqueName: \"kubernetes.io/projected/5fe849cd-ac9e-48bb-a7dd-f7f529a324e3-kube-api-access-v687s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wvj72\" (UID: \"5fe849cd-ac9e-48bb-a7dd-f7f529a324e3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.216680 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.216706 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.216729 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkx4q\" (UniqueName: \"kubernetes.io/projected/250062ed-a35d-489a-a6b5-e6f96d1532d6-kube-api-access-xkx4q\") pod \"watcher-operator-controller-manager-c8b4db7df-h9q84\" (UID: \"250062ed-a35d-489a-a6b5-e6f96d1532d6\") " pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.216784 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfw9c\" (UniqueName: \"kubernetes.io/projected/0f7a5f35-f6e0-4f17-a380-13e8718ba658-kube-api-access-pfw9c\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.217199 4942 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.217241 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:45.717225753 +0000 UTC m=+865.422158418 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "metrics-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.217341 4942 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.217382 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:45.717367597 +0000 UTC m=+865.422300262 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "webhook-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.233706 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfw9c\" (UniqueName: \"kubernetes.io/projected/0f7a5f35-f6e0-4f17-a380-13e8718ba658-kube-api-access-pfw9c\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.235646 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkx4q\" (UniqueName: \"kubernetes.io/projected/250062ed-a35d-489a-a6b5-e6f96d1532d6-kube-api-access-xkx4q\") pod \"watcher-operator-controller-manager-c8b4db7df-h9q84\" (UID: \"250062ed-a35d-489a-a6b5-e6f96d1532d6\") " 
pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.235865 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v687s\" (UniqueName: \"kubernetes.io/projected/5fe849cd-ac9e-48bb-a7dd-f7f529a324e3-kube-api-access-v687s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wvj72\" (UID: \"5fe849cd-ac9e-48bb-a7dd-f7f529a324e3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.250851 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.313527 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.383040 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.419295 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.419373 4942 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.419432 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert podName:716e0e70-0ef0-4843-9ad3-d84f47a3397f nodeName:}" failed. No retries permitted until 2026-02-18 19:31:46.419416345 +0000 UTC m=+866.124349080 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" (UID: "716e0e70-0ef0-4843-9ad3-d84f47a3397f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.432609 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" event={"ID":"4cbefad2-6c6d-4b7b-bba9-acf857a54a4b","Type":"ContainerStarted","Data":"bf3ca6b57890b06540c918f1f74d9fdd6429bf4d4eb1333bb3f3cc33715ae771"} Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.443537 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" event={"ID":"844a0cad-5a6a-4ab4-8e32-388835eb9f4a","Type":"ContainerStarted","Data":"491ea45a82cb77025866c40f2868170dcf34cae892f581557b64cce1088a204e"} Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.446100 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" event={"ID":"8d849c9e-0da1-4910-9922-5ea2dd2728a2","Type":"ContainerStarted","Data":"e63ee5cbb8fcb10c5613d21b8cf6969d33e6f10cb4401c3e53c0b50f03f9e36b"} Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.447038 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" event={"ID":"829c57a8-54c3-43c5-8bea-2ceeeafeb143","Type":"ContainerStarted","Data":"45d1768bbbe9899b96b394eebe9b15dcd5ebd49b77d371d6a68706ae71b29b43"} Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.448127 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" 
event={"ID":"51f45ea1-2b95-4553-9e3d-5e6bb4c8b862","Type":"ContainerStarted","Data":"209f319da30a65f3066f8ac773346a71c1c2f2f39926654cd7691583ffb3e8b5"} Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.449488 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" event={"ID":"1e73a8a0-3246-4a08-b4be-d587d82742a4","Type":"ContainerStarted","Data":"dd27502d63022ab3e4e0f450c4d42036958bc1c8d44bf2a67fc397b583d73b8d"} Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.449706 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.509283 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.515682 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.685862 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj"] Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.688673 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d43a851_2d6c_4fe9_86e1_04c7d382b257.slice/crio-ba3e9924d1af9a2e2cdbe04a4e584f303b04aa5530466114eb7031ec34dab3f3 WatchSource:0}: Error finding container ba3e9924d1af9a2e2cdbe04a4e584f303b04aa5530466114eb7031ec34dab3f3: Status 404 returned error can't find the container with id ba3e9924d1af9a2e2cdbe04a4e584f303b04aa5530466114eb7031ec34dab3f3 Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.692697 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.700709 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65"] Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.700736 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcde9a09e_2dfe_410e_95ad_8f297b517ef4.slice/crio-09fdc5a3f7408d0b9f325701dd398de889bf4e14ba86cfe3b8aa640d252a8589 WatchSource:0}: Error finding container 09fdc5a3f7408d0b9f325701dd398de889bf4e14ba86cfe3b8aa640d252a8589: Status 404 returned error can't find the container with id 09fdc5a3f7408d0b9f325701dd398de889bf4e14ba86cfe3b8aa640d252a8589 Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.702415 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b42f10c_a162_4d74_9eed_b6c3ef08cdb7.slice/crio-4984f32847d5cbdc32f5da53ccaef59634d1a913dade134311d80a7ca8917838 WatchSource:0}: Error finding container 4984f32847d5cbdc32f5da53ccaef59634d1a913dade134311d80a7ca8917838: Status 404 returned error can't find the container with id 4984f32847d5cbdc32f5da53ccaef59634d1a913dade134311d80a7ca8917838 Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.706170 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.711928 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98"] Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.718278 4942 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf8c140d_a735_4a14_8239_67f577546e01.slice/crio-be7a5234ea1ca872ac886838a591ce75b8185eb313f72b20a206db6baab7ffe9 WatchSource:0}: Error finding container be7a5234ea1ca872ac886838a591ce75b8185eb313f72b20a206db6baab7ffe9: Status 404 returned error can't find the container with id be7a5234ea1ca872ac886838a591ce75b8185eb313f72b20a206db6baab7ffe9 Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.726016 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.726167 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.726181 4942 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.726239 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:46.72622112 +0000 UTC m=+866.431153785 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "metrics-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.726287 4942 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.726328 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:46.726313253 +0000 UTC m=+866.431245918 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "webhook-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.889136 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.901427 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr"] Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.906895 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda65b16e4_f55f_427a_a629_2fbff014a7af.slice/crio-660a930740e1418c2dc360720ea1bf3998a8820c3fcb64645cac6ec6ee627413 WatchSource:0}: Error finding container 660a930740e1418c2dc360720ea1bf3998a8820c3fcb64645cac6ec6ee627413: Status 404 returned error can't find the 
container with id 660a930740e1418c2dc360720ea1bf3998a8820c3fcb64645cac6ec6ee627413 Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.910281 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225"] Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.915789 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6618726f_c93c_4d05_b6d9_a08aca84801f.slice/crio-df22e4522fc4389f2ac239a4c109904e7123c68fb8dd4fab45d9bbe2031a749e WatchSource:0}: Error finding container df22e4522fc4389f2ac239a4c109904e7123c68fb8dd4fab45d9bbe2031a749e: Status 404 returned error can't find the container with id df22e4522fc4389f2ac239a4c109904e7123c68fb8dd4fab45d9bbe2031a749e Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.917239 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v"] Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.919119 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ca2018a_1b2e_4fa2_8564_3e2a0d3d8377.slice/crio-638566932c2db92780adfcd405cd6d4ed321369db99a3902aad20533c069cb19 WatchSource:0}: Error finding container 638566932c2db92780adfcd405cd6d4ed321369db99a3902aad20533c069cb19: Status 404 returned error can't find the container with id 638566932c2db92780adfcd405cd6d4ed321369db99a3902aad20533c069cb19 Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.921202 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8kxjt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57bd55f9b7-cg225_openstack-operators(8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.920376 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:015f7f2d8b5afc85e51dd3b2e02a4cfb8294b543437315b291006d2416764db9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ppsnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-79558bbfbf-r8hvr_openstack-operators(6618726f-c93c-4d05-b6d9-a08aca84801f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.922441 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" podUID="8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377" Feb 18 19:31:45 crc 
kubenswrapper[4942]: E0218 19:31:45.923291 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" podUID="6618726f-c93c-4d05-b6d9-a08aca84801f" Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.924346 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fda65c9_97fe_4689_bd35_7f7974841223.slice/crio-03424b38b3c0f673d58b17261e271e3dc671cb717c527775f272b6b7f16ee4e9 WatchSource:0}: Error finding container 03424b38b3c0f673d58b17261e271e3dc671cb717c527775f272b6b7f16ee4e9: Status 404 returned error can't find the container with id 03424b38b3c0f673d58b17261e271e3dc671cb717c527775f272b6b7f16ee4e9 Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.926753 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lwh79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-8467ccb4c8-shr4v_openstack-operators(2fda65c9-97fe-4689-bd35-7f7974841223): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.928196 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" podUID="2fda65c9-97fe-4689-bd35-7f7974841223" Feb 18 19:31:46 crc 
kubenswrapper[4942]: I0218 19:31:46.024818 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84"] Feb 18 19:31:46 crc kubenswrapper[4942]: W0218 19:31:46.025697 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod250062ed_a35d_489a_a6b5_e6f96d1532d6.slice/crio-9a53e41ecd9bc15ba96074727586423e878c0a93bc1ce3ac75ba8f7ba5e61636 WatchSource:0}: Error finding container 9a53e41ecd9bc15ba96074727586423e878c0a93bc1ce3ac75ba8f7ba5e61636: Status 404 returned error can't find the container with id 9a53e41ecd9bc15ba96074727586423e878c0a93bc1ce3ac75ba8f7ba5e61636 Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.029737 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72"] Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.031033 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.031311 4942 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.031365 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert podName:230a2167-e078-48a6-93ce-84a37ff4ac02 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:48.031348155 +0000 UTC m=+867.736280820 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert") pod "infra-operator-controller-manager-66d6b5f488-5vptt" (UID: "230a2167-e078-48a6-93ce-84a37ff4ac02") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.031389 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.12:5001/openstack-k8s-operators/watcher-operator:bccc5f477aecf1b112841224406211ceeff240ba,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xkx4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-c8b4db7df-h9q84_openstack-operators(250062ed-a35d-489a-a6b5-e6f96d1532d6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:31:46 crc kubenswrapper[4942]: W0218 19:31:46.031707 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe849cd_ac9e_48bb_a7dd_f7f529a324e3.slice/crio-513691ae5bc9e0537d5f7df8632eefb88aab16e7968a1a7710409c8cb9269a3f WatchSource:0}: Error finding container 513691ae5bc9e0537d5f7df8632eefb88aab16e7968a1a7710409c8cb9269a3f: Status 404 returned error can't find the container with id 513691ae5bc9e0537d5f7df8632eefb88aab16e7968a1a7710409c8cb9269a3f Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.032861 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" podUID="250062ed-a35d-489a-a6b5-e6f96d1532d6" Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.037103 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v687s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-wvj72_openstack-operators(5fe849cd-ac9e-48bb-a7dd-f7f529a324e3): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.038271 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" podUID="5fe849cd-ac9e-48bb-a7dd-f7f529a324e3" Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.187657 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.187706 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.242720 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.438789 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.438971 4942 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.439048 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert podName:716e0e70-0ef0-4843-9ad3-d84f47a3397f nodeName:}" failed. 
No retries permitted until 2026-02-18 19:31:48.439030072 +0000 UTC m=+868.143962737 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" (UID: "716e0e70-0ef0-4843-9ad3-d84f47a3397f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.460665 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" event={"ID":"3b42f10c-a162-4d74-9eed-b6c3ef08cdb7","Type":"ContainerStarted","Data":"4984f32847d5cbdc32f5da53ccaef59634d1a913dade134311d80a7ca8917838"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.462633 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" event={"ID":"a15b8ac2-0742-4fd7-9a14-005620c93a3d","Type":"ContainerStarted","Data":"1657474b3474d4bfd82ff0e36e34cba7d2fb16e42e519c3466e6987d12ee549f"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.464633 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" event={"ID":"a65b16e4-f55f-427a-a629-2fbff014a7af","Type":"ContainerStarted","Data":"660a930740e1418c2dc360720ea1bf3998a8820c3fcb64645cac6ec6ee627413"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.465945 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" event={"ID":"6618726f-c93c-4d05-b6d9-a08aca84801f","Type":"ContainerStarted","Data":"df22e4522fc4389f2ac239a4c109904e7123c68fb8dd4fab45d9bbe2031a749e"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.466987 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" event={"ID":"8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377","Type":"ContainerStarted","Data":"638566932c2db92780adfcd405cd6d4ed321369db99a3902aad20533c069cb19"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.468641 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" event={"ID":"2fda65c9-97fe-4689-bd35-7f7974841223","Type":"ContainerStarted","Data":"03424b38b3c0f673d58b17261e271e3dc671cb717c527775f272b6b7f16ee4e9"} Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.469193 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" podUID="8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377" Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.469944 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27\\\"\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" podUID="2fda65c9-97fe-4689-bd35-7f7974841223" Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.470019 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" event={"ID":"c2cc0d22-92b6-4c67-9627-79abffb9917c","Type":"ContainerStarted","Data":"de8058e26faf64d71c41c756de3bf8fc81b29468e63fa447937d48b21eada35e"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.471750 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" event={"ID":"80bc5b9b-00c2-4003-8279-1dbc3ff3aa05","Type":"ContainerStarted","Data":"fbd2bd82e4c0a6dbd816d36a53c5abdf89588dbd915ee9fed90d1d6f3f25640e"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.474437 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" event={"ID":"5fe849cd-ac9e-48bb-a7dd-f7f529a324e3","Type":"ContainerStarted","Data":"513691ae5bc9e0537d5f7df8632eefb88aab16e7968a1a7710409c8cb9269a3f"} Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.478153 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" podUID="5fe849cd-ac9e-48bb-a7dd-f7f529a324e3" Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.480617 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:015f7f2d8b5afc85e51dd3b2e02a4cfb8294b543437315b291006d2416764db9\\\"\"" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" podUID="6618726f-c93c-4d05-b6d9-a08aca84801f" Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.482015 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" event={"ID":"250062ed-a35d-489a-a6b5-e6f96d1532d6","Type":"ContainerStarted","Data":"9a53e41ecd9bc15ba96074727586423e878c0a93bc1ce3ac75ba8f7ba5e61636"} Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.493748 4942 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.12:5001/openstack-k8s-operators/watcher-operator:bccc5f477aecf1b112841224406211ceeff240ba\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" podUID="250062ed-a35d-489a-a6b5-e6f96d1532d6" Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.501479 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" event={"ID":"cde9a09e-2dfe-410e-95ad-8f297b517ef4","Type":"ContainerStarted","Data":"09fdc5a3f7408d0b9f325701dd398de889bf4e14ba86cfe3b8aa640d252a8589"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.502781 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" event={"ID":"df8c140d-a735-4a14-8239-67f577546e01","Type":"ContainerStarted","Data":"be7a5234ea1ca872ac886838a591ce75b8185eb313f72b20a206db6baab7ffe9"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.503922 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" event={"ID":"9d43a851-2d6c-4fe9-86e1-04c7d382b257","Type":"ContainerStarted","Data":"ba3e9924d1af9a2e2cdbe04a4e584f303b04aa5530466114eb7031ec34dab3f3"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.505458 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" event={"ID":"11715b33-f996-46bf-81db-0557e84e7fea","Type":"ContainerStarted","Data":"a612a48db18acec5544ca2aac968fd95a3b6e878e7a0f9cc01fe74af053edc8a"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.562498 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.609588 4942 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-928mg"] Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.745225 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.745385 4942 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.745525 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:48.745502 +0000 UTC m=+868.450434665 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "webhook-server-cert" not found Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.745605 4942 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.745650 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:48.745637243 +0000 UTC m=+868.450569918 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "metrics-server-cert" not found Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.745448 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:47 crc kubenswrapper[4942]: E0218 19:31:47.517113 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27\\\"\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" podUID="2fda65c9-97fe-4689-bd35-7f7974841223" Feb 18 19:31:47 crc kubenswrapper[4942]: E0218 19:31:47.517535 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" podUID="8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377" Feb 18 19:31:47 crc kubenswrapper[4942]: E0218 19:31:47.517674 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:015f7f2d8b5afc85e51dd3b2e02a4cfb8294b543437315b291006d2416764db9\\\"\"" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" podUID="6618726f-c93c-4d05-b6d9-a08aca84801f" Feb 18 19:31:47 crc kubenswrapper[4942]: E0218 19:31:47.517679 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.12:5001/openstack-k8s-operators/watcher-operator:bccc5f477aecf1b112841224406211ceeff240ba\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" podUID="250062ed-a35d-489a-a6b5-e6f96d1532d6" Feb 18 19:31:47 crc kubenswrapper[4942]: E0218 19:31:47.518605 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" podUID="5fe849cd-ac9e-48bb-a7dd-f7f529a324e3" Feb 18 19:31:48 crc kubenswrapper[4942]: I0218 19:31:48.082622 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:48 crc kubenswrapper[4942]: E0218 19:31:48.082695 4942 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:48 crc kubenswrapper[4942]: E0218 19:31:48.082823 4942 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert podName:230a2167-e078-48a6-93ce-84a37ff4ac02 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:52.082806637 +0000 UTC m=+871.787739302 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert") pod "infra-operator-controller-manager-66d6b5f488-5vptt" (UID: "230a2167-e078-48a6-93ce-84a37ff4ac02") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:48 crc kubenswrapper[4942]: I0218 19:31:48.488877 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:48 crc kubenswrapper[4942]: E0218 19:31:48.489064 4942 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:48 crc kubenswrapper[4942]: E0218 19:31:48.489113 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert podName:716e0e70-0ef0-4843-9ad3-d84f47a3397f nodeName:}" failed. No retries permitted until 2026-02-18 19:31:52.489098729 +0000 UTC m=+872.194031384 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" (UID: "716e0e70-0ef0-4843-9ad3-d84f47a3397f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:48 crc kubenswrapper[4942]: I0218 19:31:48.522991 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-928mg" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="registry-server" containerID="cri-o://801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de" gracePeriod=2 Feb 18 19:31:48 crc kubenswrapper[4942]: I0218 19:31:48.792945 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:48 crc kubenswrapper[4942]: I0218 19:31:48.793071 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:48 crc kubenswrapper[4942]: E0218 19:31:48.793242 4942 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:31:48 crc kubenswrapper[4942]: E0218 19:31:48.793281 4942 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:31:48 crc kubenswrapper[4942]: E0218 
19:31:48.793373 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:52.793342271 +0000 UTC m=+872.498274976 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "metrics-server-cert" not found Feb 18 19:31:48 crc kubenswrapper[4942]: E0218 19:31:48.793410 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:52.793391402 +0000 UTC m=+872.498324107 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "webhook-server-cert" not found Feb 18 19:31:49 crc kubenswrapper[4942]: I0218 19:31:49.532654 4942 generic.go:334] "Generic (PLEG): container finished" podID="83b97eec-f1b8-4205-933f-205e30caeec2" containerID="801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de" exitCode=0 Feb 18 19:31:49 crc kubenswrapper[4942]: I0218 19:31:49.532692 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-928mg" event={"ID":"83b97eec-f1b8-4205-933f-205e30caeec2","Type":"ContainerDied","Data":"801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de"} Feb 18 19:31:52 crc kubenswrapper[4942]: I0218 19:31:52.146860 4942 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:52 crc kubenswrapper[4942]: E0218 19:31:52.147066 4942 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:52 crc kubenswrapper[4942]: E0218 19:31:52.147821 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert podName:230a2167-e078-48a6-93ce-84a37ff4ac02 nodeName:}" failed. No retries permitted until 2026-02-18 19:32:00.147791738 +0000 UTC m=+879.852724443 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert") pod "infra-operator-controller-manager-66d6b5f488-5vptt" (UID: "230a2167-e078-48a6-93ce-84a37ff4ac02") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:52 crc kubenswrapper[4942]: I0218 19:31:52.553953 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:52 crc kubenswrapper[4942]: E0218 19:31:52.554092 4942 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:52 crc kubenswrapper[4942]: E0218 19:31:52.554171 4942 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert podName:716e0e70-0ef0-4843-9ad3-d84f47a3397f nodeName:}" failed. No retries permitted until 2026-02-18 19:32:00.554150552 +0000 UTC m=+880.259083217 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" (UID: "716e0e70-0ef0-4843-9ad3-d84f47a3397f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:52 crc kubenswrapper[4942]: I0218 19:31:52.857848 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:52 crc kubenswrapper[4942]: I0218 19:31:52.857925 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:52 crc kubenswrapper[4942]: E0218 19:31:52.858141 4942 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:31:52 crc kubenswrapper[4942]: E0218 19:31:52.858201 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:32:00.858182617 +0000 UTC m=+880.563115302 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "metrics-server-cert" not found Feb 18 19:31:52 crc kubenswrapper[4942]: E0218 19:31:52.858655 4942 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:31:52 crc kubenswrapper[4942]: E0218 19:31:52.858696 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:32:00.85868286 +0000 UTC m=+880.563615535 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "webhook-server-cert" not found Feb 18 19:31:56 crc kubenswrapper[4942]: E0218 19:31:56.186065 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de is running failed: container process not found" containerID="801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 19:31:56 crc kubenswrapper[4942]: E0218 19:31:56.187073 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de is running 
failed: container process not found" containerID="801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 19:31:56 crc kubenswrapper[4942]: E0218 19:31:56.187531 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de is running failed: container process not found" containerID="801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 19:31:56 crc kubenswrapper[4942]: E0218 19:31:56.187555 4942 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-928mg" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="registry-server" Feb 18 19:31:57 crc kubenswrapper[4942]: E0218 19:31:57.778971 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc" Feb 18 19:31:57 crc kubenswrapper[4942]: E0218 19:31:57.779147 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6wkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-56dc67d744-hhjwz_openstack-operators(a65b16e4-f55f-427a-a629-2fbff014a7af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:31:57 crc kubenswrapper[4942]: E0218 19:31:57.780285 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" podUID="a65b16e4-f55f-427a-a629-2fbff014a7af" Feb 18 19:31:58 crc kubenswrapper[4942]: E0218 19:31:58.258307 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89" Feb 18 19:31:58 crc kubenswrapper[4942]: E0218 19:31:58.258488 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-89l7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-54967dbbdf-tzn65_openstack-operators(9d43a851-2d6c-4fe9-86e1-04c7d382b257): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:31:58 crc kubenswrapper[4942]: E0218 19:31:58.259659 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" podUID="9d43a851-2d6c-4fe9-86e1-04c7d382b257" Feb 18 19:31:58 crc kubenswrapper[4942]: E0218 19:31:58.602924 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" podUID="a65b16e4-f55f-427a-a629-2fbff014a7af" Feb 18 19:31:58 crc kubenswrapper[4942]: E0218 19:31:58.604215 4942 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" podUID="9d43a851-2d6c-4fe9-86e1-04c7d382b257" Feb 18 19:31:59 crc kubenswrapper[4942]: I0218 19:31:59.039816 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:32:00 crc kubenswrapper[4942]: I0218 19:32:00.175545 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:32:00 crc kubenswrapper[4942]: E0218 19:32:00.175712 4942 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:32:00 crc kubenswrapper[4942]: E0218 19:32:00.175796 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert podName:230a2167-e078-48a6-93ce-84a37ff4ac02 nodeName:}" failed. No retries permitted until 2026-02-18 19:32:16.175754531 +0000 UTC m=+895.880687196 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert") pod "infra-operator-controller-manager-66d6b5f488-5vptt" (UID: "230a2167-e078-48a6-93ce-84a37ff4ac02") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:32:00 crc kubenswrapper[4942]: I0218 19:32:00.580748 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:00.589868 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:00.851865 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:00.884023 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:00.884223 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:00.890340 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:00.891010 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:00.924492 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.693482 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pmrtb"] Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.697487 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.718491 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pmrtb"] Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.823557 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-utilities\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.823636 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksk4m\" (UniqueName: \"kubernetes.io/projected/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-kube-api-access-ksk4m\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.823724 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-catalog-content\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.924657 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-utilities\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.924735 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksk4m\" (UniqueName: \"kubernetes.io/projected/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-kube-api-access-ksk4m\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.924788 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-catalog-content\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.925543 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-catalog-content\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.925549 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-utilities\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.944120 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksk4m\" (UniqueName: 
\"kubernetes.io/projected/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-kube-api-access-ksk4m\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:02.070805 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: E0218 19:32:03.174723 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:16b541cff6581510978343a1bdc152a07fafcafa420b604f19291858e3d25fee" Feb 18 19:32:04 crc kubenswrapper[4942]: E0218 19:32:03.174961 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:16b541cff6581510978343a1bdc152a07fafcafa420b604f19291858e3d25fee,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mgjwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-96fff9cb8-qs9mb_openstack-operators(a15b8ac2-0742-4fd7-9a14-005620c93a3d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:32:04 crc kubenswrapper[4942]: E0218 19:32:03.177985 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" podUID="a15b8ac2-0742-4fd7-9a14-005620c93a3d" Feb 18 19:32:04 crc kubenswrapper[4942]: E0218 19:32:03.650642 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/manila-operator@sha256:16b541cff6581510978343a1bdc152a07fafcafa420b604f19291858e3d25fee\\\"\"" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" podUID="a15b8ac2-0742-4fd7-9a14-005620c93a3d" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.503746 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.582132 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-catalog-content\") pod \"83b97eec-f1b8-4205-933f-205e30caeec2\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.582198 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-utilities\") pod \"83b97eec-f1b8-4205-933f-205e30caeec2\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.582250 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kt9m\" (UniqueName: \"kubernetes.io/projected/83b97eec-f1b8-4205-933f-205e30caeec2-kube-api-access-6kt9m\") pod \"83b97eec-f1b8-4205-933f-205e30caeec2\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.587822 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b97eec-f1b8-4205-933f-205e30caeec2-kube-api-access-6kt9m" (OuterVolumeSpecName: "kube-api-access-6kt9m") pod "83b97eec-f1b8-4205-933f-205e30caeec2" (UID: "83b97eec-f1b8-4205-933f-205e30caeec2"). InnerVolumeSpecName "kube-api-access-6kt9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.591377 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-utilities" (OuterVolumeSpecName: "utilities") pod "83b97eec-f1b8-4205-933f-205e30caeec2" (UID: "83b97eec-f1b8-4205-933f-205e30caeec2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.629695 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83b97eec-f1b8-4205-933f-205e30caeec2" (UID: "83b97eec-f1b8-4205-933f-205e30caeec2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.658934 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-928mg" event={"ID":"83b97eec-f1b8-4205-933f-205e30caeec2","Type":"ContainerDied","Data":"e90c667153875bf407511ed88e15dc632e46a63fd6b238de865623e6e16e6e1a"} Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.658993 4942 scope.go:117] "RemoveContainer" containerID="801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.659045 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.684316 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.684344 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.684354 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kt9m\" (UniqueName: \"kubernetes.io/projected/83b97eec-f1b8-4205-933f-205e30caeec2-kube-api-access-6kt9m\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.709875 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-928mg"] Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.716538 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-928mg"] Feb 18 19:32:05 crc kubenswrapper[4942]: I0218 19:32:05.056964 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" path="/var/lib/kubelet/pods/83b97eec-f1b8-4205-933f-205e30caeec2/volumes" Feb 18 19:32:05 crc kubenswrapper[4942]: E0218 19:32:05.954321 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469" Feb 18 19:32:05 crc kubenswrapper[4942]: E0218 19:32:05.955606 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n49wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-6c78d668d5-t9dzq_openstack-operators(11715b33-f996-46bf-81db-0557e84e7fea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:32:05 crc kubenswrapper[4942]: E0218 19:32:05.957616 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" podUID="11715b33-f996-46bf-81db-0557e84e7fea" Feb 18 19:32:06 crc kubenswrapper[4942]: E0218 19:32:06.416228 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c" Feb 18 19:32:06 crc kubenswrapper[4942]: E0218 19:32:06.416400 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hktdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5ddd85db87-5jzdp_openstack-operators(cde9a09e-2dfe-410e-95ad-8f297b517ef4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:32:06 crc kubenswrapper[4942]: E0218 19:32:06.417504 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" podUID="cde9a09e-2dfe-410e-95ad-8f297b517ef4" Feb 18 19:32:06 crc kubenswrapper[4942]: E0218 19:32:06.672542 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" podUID="11715b33-f996-46bf-81db-0557e84e7fea" Feb 18 19:32:06 crc kubenswrapper[4942]: E0218 19:32:06.672567 4942 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" podUID="cde9a09e-2dfe-410e-95ad-8f297b517ef4" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.077860 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xhp5w"] Feb 18 19:32:07 crc kubenswrapper[4942]: E0218 19:32:07.078189 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="extract-content" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.078201 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="extract-content" Feb 18 19:32:07 crc kubenswrapper[4942]: E0218 19:32:07.078215 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="extract-utilities" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.078221 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="extract-utilities" Feb 18 19:32:07 crc kubenswrapper[4942]: E0218 19:32:07.078235 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="registry-server" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.078242 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="registry-server" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.078395 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="registry-server" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.079383 4942 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.105085 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xhp5w"] Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.127105 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-catalog-content\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.127151 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-utilities\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.127493 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm557\" (UniqueName: \"kubernetes.io/projected/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-kube-api-access-tm557\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.229039 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm557\" (UniqueName: \"kubernetes.io/projected/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-kube-api-access-tm557\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 
19:32:07.229110 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-catalog-content\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.229136 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-utilities\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.229609 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-utilities\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.229815 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-catalog-content\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.250512 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm557\" (UniqueName: \"kubernetes.io/projected/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-kube-api-access-tm557\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.404807 4942 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:10 crc kubenswrapper[4942]: I0218 19:32:10.018595 4942 scope.go:117] "RemoveContainer" containerID="63a88ca6ca33dff5e2ab3bac904d6ef958cd2ba371f5bb8c57b0d9a89c8d0c4e" Feb 18 19:32:10 crc kubenswrapper[4942]: I0218 19:32:10.811797 4942 scope.go:117] "RemoveContainer" containerID="d49471940515dac44ca7b4deb7b69786b17d58c82210cbd128da7a4353fdc212" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.080361 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9"] Feb 18 19:32:11 crc kubenswrapper[4942]: W0218 19:32:11.090971 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f7a5f35_f6e0_4f17_a380_13e8718ba658.slice/crio-90db3468cce7dfd35ccc773b5df3877a748faf704881c53896b478806d826b11 WatchSource:0}: Error finding container 90db3468cce7dfd35ccc773b5df3877a748faf704881c53896b478806d826b11: Status 404 returned error can't find the container with id 90db3468cce7dfd35ccc773b5df3877a748faf704881c53896b478806d826b11 Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.139271 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pmrtb"] Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.147242 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk"] Feb 18 19:32:11 crc kubenswrapper[4942]: W0218 19:32:11.171505 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4cd7c2a_4d5f_48c2_9af4_bcc237367416.slice/crio-dee0baac7e3a9c49a9187f5763b3e97bcf6e3a78d211e733d817f073fc2a3c4b WatchSource:0}: Error finding container dee0baac7e3a9c49a9187f5763b3e97bcf6e3a78d211e733d817f073fc2a3c4b: Status 404 
returned error can't find the container with id dee0baac7e3a9c49a9187f5763b3e97bcf6e3a78d211e733d817f073fc2a3c4b Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.304103 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xhp5w"] Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.708978 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" event={"ID":"844a0cad-5a6a-4ab4-8e32-388835eb9f4a","Type":"ContainerStarted","Data":"c1bbdecfe782024ab3ee2b60bd247e6c6890af98dee050a37e909cd64cd9d960"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.709332 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.716196 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" event={"ID":"80bc5b9b-00c2-4003-8279-1dbc3ff3aa05","Type":"ContainerStarted","Data":"7480a5995118be4d9ce6e060f3dcca85c3bf26dfca52129b531ff7ad10f4015a"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.716350 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.728602 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp5w" event={"ID":"04adf08c-4b6b-49c1-be25-d2cc8c67dce2","Type":"ContainerStarted","Data":"d3a4dd0670baf5152526b789369cfc767a3b4f746a7ebf6b7d9421c87331aa1b"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.728644 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp5w" 
event={"ID":"04adf08c-4b6b-49c1-be25-d2cc8c67dce2","Type":"ContainerStarted","Data":"5e3cbf30742449f377e56eae72b81f7947e381700f22252f373a197aae1f9b45"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.733539 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" event={"ID":"5fe849cd-ac9e-48bb-a7dd-f7f529a324e3","Type":"ContainerStarted","Data":"390939293f8458f582c4596c1f4efb6dad4bd2551babc091f16fbd23e1fb4133"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.736105 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" event={"ID":"716e0e70-0ef0-4843-9ad3-d84f47a3397f","Type":"ContainerStarted","Data":"f56c83e0bb2fbdc3faa2568d196d341fbff48dae52eaf90f25beae7ad4410e7b"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.748231 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" event={"ID":"df8c140d-a735-4a14-8239-67f577546e01","Type":"ContainerStarted","Data":"e8309540c2de7c1091ee1196531e8a09c1f7fab1cce1e11ba5a88c193f81de4e"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.748378 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.753022 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" podStartSLOduration=6.554672404 podStartE2EDuration="27.753005599s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.229753487 +0000 UTC m=+864.934686142" lastFinishedPulling="2026-02-18 19:32:06.428086672 +0000 UTC m=+886.133019337" observedRunningTime="2026-02-18 19:32:11.74746303 +0000 UTC m=+891.452395695" 
watchObservedRunningTime="2026-02-18 19:32:11.753005599 +0000 UTC m=+891.457938254" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.757043 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" event={"ID":"c2cc0d22-92b6-4c67-9627-79abffb9917c","Type":"ContainerStarted","Data":"10db60d2baee30c4eb2de8561b1daf0856f78e5e40bb024a2f329ea2f85eb594"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.757737 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.762151 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmrtb" event={"ID":"a4cd7c2a-4d5f-48c2-9af4-bcc237367416","Type":"ContainerStarted","Data":"dee0baac7e3a9c49a9187f5763b3e97bcf6e3a78d211e733d817f073fc2a3c4b"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.775641 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" event={"ID":"1e73a8a0-3246-4a08-b4be-d587d82742a4","Type":"ContainerStarted","Data":"d92a181d09de29a842692b8d7e0a930f74576a1453b28ad85cb69f74d7c93806"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.775963 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.792307 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" event={"ID":"4cbefad2-6c6d-4b7b-bba9-acf857a54a4b","Type":"ContainerStarted","Data":"680b16332f49059f660d9bb33e58365db2c2369f30b4ad6dbd1d1a7dbc47100d"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.792585 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.810016 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" event={"ID":"8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377","Type":"ContainerStarted","Data":"0fb9b55866c91b5201a1184b05069132e435d7929f2a2d1cb55f7d78ec122461"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.810693 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.815865 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" event={"ID":"51f45ea1-2b95-4553-9e3d-5e6bb4c8b862","Type":"ContainerStarted","Data":"b0487a7e832bf13b8b13552580da81bf5dcb7c629d72bf0966dab0b259a928e6"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.815908 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.830119 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" event={"ID":"0f7a5f35-f6e0-4f17-a380-13e8718ba658","Type":"ContainerStarted","Data":"90db3468cce7dfd35ccc773b5df3877a748faf704881c53896b478806d826b11"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.831783 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.855907 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" 
podStartSLOduration=7.114890696 podStartE2EDuration="27.85588791s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.68793886 +0000 UTC m=+865.392871525" lastFinishedPulling="2026-02-18 19:32:06.428936074 +0000 UTC m=+886.133868739" observedRunningTime="2026-02-18 19:32:11.84871382 +0000 UTC m=+891.553646495" watchObservedRunningTime="2026-02-18 19:32:11.85588791 +0000 UTC m=+891.560820575" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.860364 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" event={"ID":"8d849c9e-0da1-4910-9922-5ea2dd2728a2","Type":"ContainerStarted","Data":"ab2260185e477444de83cdabe59b88c988214cf068d9d7c979d902ecd79e09bf"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.861156 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.887040 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" event={"ID":"829c57a8-54c3-43c5-8bea-2ceeeafeb143","Type":"ContainerStarted","Data":"b63a72628a085f5b208d81f35b38e1e21b8ab3209207b7d224ee7bfff08b74f2"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.887682 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.885751 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" podStartSLOduration=3.121887411 podStartE2EDuration="27.885728848s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:46.036995496 +0000 UTC m=+865.741928161" 
lastFinishedPulling="2026-02-18 19:32:10.800836933 +0000 UTC m=+890.505769598" observedRunningTime="2026-02-18 19:32:11.883093322 +0000 UTC m=+891.588025987" watchObservedRunningTime="2026-02-18 19:32:11.885728848 +0000 UTC m=+891.590661523" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.910645 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" podStartSLOduration=6.556031368 podStartE2EDuration="27.910627813s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.074545954 +0000 UTC m=+864.779478619" lastFinishedPulling="2026-02-18 19:32:06.429142399 +0000 UTC m=+886.134075064" observedRunningTime="2026-02-18 19:32:11.907414622 +0000 UTC m=+891.612347287" watchObservedRunningTime="2026-02-18 19:32:11.910627813 +0000 UTC m=+891.615560478" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.949914 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" podStartSLOduration=6.9308956219999995 podStartE2EDuration="27.949892498s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.409117737 +0000 UTC m=+865.114050402" lastFinishedPulling="2026-02-18 19:32:06.428114603 +0000 UTC m=+886.133047278" observedRunningTime="2026-02-18 19:32:11.933368423 +0000 UTC m=+891.638301108" watchObservedRunningTime="2026-02-18 19:32:11.949892498 +0000 UTC m=+891.654825163" Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.032984 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" podStartSLOduration=3.231510622 podStartE2EDuration="28.032961412s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.921086649 +0000 UTC m=+865.626019314" 
lastFinishedPulling="2026-02-18 19:32:10.722537439 +0000 UTC m=+890.427470104" observedRunningTime="2026-02-18 19:32:11.992377724 +0000 UTC m=+891.697310379" watchObservedRunningTime="2026-02-18 19:32:12.032961412 +0000 UTC m=+891.737894077" Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.046863 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" podStartSLOduration=6.692924812 podStartE2EDuration="28.04684801s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.073885957 +0000 UTC m=+864.778818622" lastFinishedPulling="2026-02-18 19:32:06.427809155 +0000 UTC m=+886.132741820" observedRunningTime="2026-02-18 19:32:12.025278139 +0000 UTC m=+891.730210814" watchObservedRunningTime="2026-02-18 19:32:12.04684801 +0000 UTC m=+891.751780675" Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.048490 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" podStartSLOduration=6.544076178 podStartE2EDuration="28.048484251s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:44.909219677 +0000 UTC m=+864.614152342" lastFinishedPulling="2026-02-18 19:32:06.41362774 +0000 UTC m=+886.118560415" observedRunningTime="2026-02-18 19:32:12.044107561 +0000 UTC m=+891.749040236" watchObservedRunningTime="2026-02-18 19:32:12.048484251 +0000 UTC m=+891.753416906" Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.082514 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" podStartSLOduration=28.082499434 podStartE2EDuration="28.082499434s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 19:32:12.079031517 +0000 UTC m=+891.783964182" watchObservedRunningTime="2026-02-18 19:32:12.082499434 +0000 UTC m=+891.787432099" Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.181840 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" podStartSLOduration=7.294289407 podStartE2EDuration="28.181824156s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.532519642 +0000 UTC m=+865.237452307" lastFinishedPulling="2026-02-18 19:32:06.420054381 +0000 UTC m=+886.124987056" observedRunningTime="2026-02-18 19:32:12.145252029 +0000 UTC m=+891.850184694" watchObservedRunningTime="2026-02-18 19:32:12.181824156 +0000 UTC m=+891.886756821" Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.197239 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" podStartSLOduration=7.490436477 podStartE2EDuration="28.197222702s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.72102208 +0000 UTC m=+865.425954745" lastFinishedPulling="2026-02-18 19:32:06.427808315 +0000 UTC m=+886.132740970" observedRunningTime="2026-02-18 19:32:12.179463127 +0000 UTC m=+891.884395792" watchObservedRunningTime="2026-02-18 19:32:12.197222702 +0000 UTC m=+891.902155367" Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.274106 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" podStartSLOduration=8.837003958 podStartE2EDuration="28.274091031s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:44.992895076 +0000 UTC m=+864.697827741" lastFinishedPulling="2026-02-18 19:32:04.429982149 +0000 UTC m=+884.134914814" observedRunningTime="2026-02-18 
19:32:12.22700994 +0000 UTC m=+891.931942605" watchObservedRunningTime="2026-02-18 19:32:12.274091031 +0000 UTC m=+891.979023686" Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.914939 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" event={"ID":"250062ed-a35d-489a-a6b5-e6f96d1532d6","Type":"ContainerStarted","Data":"f1dd92b6986761456a3e1ceded0cd4cf772b850a7f5f50fb56434c23cb331ecc"} Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.915992 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.924219 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" event={"ID":"6618726f-c93c-4d05-b6d9-a08aca84801f","Type":"ContainerStarted","Data":"178a8abd8da6dd29444ee8dd3b30ed9484c2e3123d1c655c9bbf99d410cc2433"} Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.924697 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.935132 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" event={"ID":"a65b16e4-f55f-427a-a629-2fbff014a7af","Type":"ContainerStarted","Data":"59fc3f13648d1b366f82c5928cd6cd4b94a82e04de46f3aeb21b7751df9a5d87"} Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.935650 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.945681 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" event={"ID":"3b42f10c-a162-4d74-9eed-b6c3ef08cdb7","Type":"ContainerStarted","Data":"b0e5cc17d5708a2bf67f2c62fdedb963fde1c3e9e426935ccb4895be0efefc73"} Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.946394 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.946715 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" podStartSLOduration=4.166554668 podStartE2EDuration="28.946702034s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:46.031218741 +0000 UTC m=+865.736151406" lastFinishedPulling="2026-02-18 19:32:10.811366107 +0000 UTC m=+890.516298772" observedRunningTime="2026-02-18 19:32:12.940830566 +0000 UTC m=+892.645763231" watchObservedRunningTime="2026-02-18 19:32:12.946702034 +0000 UTC m=+892.651634699" Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.961378 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" podStartSLOduration=4.230936423 podStartE2EDuration="28.961363551s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.920252908 +0000 UTC m=+865.625185573" lastFinishedPulling="2026-02-18 19:32:10.650680036 +0000 UTC m=+890.355612701" observedRunningTime="2026-02-18 19:32:12.95972899 +0000 UTC m=+892.664661655" watchObservedRunningTime="2026-02-18 19:32:12.961363551 +0000 UTC m=+892.666296216" Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.980485 4942 generic.go:334] "Generic (PLEG): container finished" podID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerID="d3a4dd0670baf5152526b789369cfc767a3b4f746a7ebf6b7d9421c87331aa1b" 
exitCode=0 Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.980559 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp5w" event={"ID":"04adf08c-4b6b-49c1-be25-d2cc8c67dce2","Type":"ContainerDied","Data":"d3a4dd0670baf5152526b789369cfc767a3b4f746a7ebf6b7d9421c87331aa1b"} Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.980584 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp5w" event={"ID":"04adf08c-4b6b-49c1-be25-d2cc8c67dce2","Type":"ContainerStarted","Data":"ed9840277aa9db07d748c964a420663376df6cd57140cd5dec23b586bf0ce286"} Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.983199 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" podStartSLOduration=4.004905683 podStartE2EDuration="28.983183849s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.911362025 +0000 UTC m=+865.616294680" lastFinishedPulling="2026-02-18 19:32:10.889640181 +0000 UTC m=+890.594572846" observedRunningTime="2026-02-18 19:32:12.981800244 +0000 UTC m=+892.686732909" watchObservedRunningTime="2026-02-18 19:32:12.983183849 +0000 UTC m=+892.688116514" Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.991006 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" event={"ID":"0f7a5f35-f6e0-4f17-a380-13e8718ba658","Type":"ContainerStarted","Data":"7c1c4b59848fdb8ca9d69d3cf3df8dbf093a478293a91ec3ec32fdba3d691720"} Feb 18 19:32:13 crc kubenswrapper[4942]: I0218 19:32:13.006908 4942 generic.go:334] "Generic (PLEG): container finished" podID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerID="c0f81453e7d6dc223b51f45fabda95b58d634396c8127da2162b425bed1f7043" exitCode=0 Feb 18 19:32:13 crc kubenswrapper[4942]: I0218 19:32:13.006968 4942 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmrtb" event={"ID":"a4cd7c2a-4d5f-48c2-9af4-bcc237367416","Type":"ContainerDied","Data":"c0f81453e7d6dc223b51f45fabda95b58d634396c8127da2162b425bed1f7043"} Feb 18 19:32:13 crc kubenswrapper[4942]: I0218 19:32:13.029477 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" event={"ID":"2fda65c9-97fe-4689-bd35-7f7974841223","Type":"ContainerStarted","Data":"cb32a9d94b49ddf033562e8d3ce5669858dce2299b0f583f194dc4d58e683e47"} Feb 18 19:32:13 crc kubenswrapper[4942]: I0218 19:32:13.029861 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" Feb 18 19:32:13 crc kubenswrapper[4942]: I0218 19:32:13.049185 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" podStartSLOduration=7.784979906 podStartE2EDuration="29.049167584s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.705977763 +0000 UTC m=+865.410910428" lastFinishedPulling="2026-02-18 19:32:06.970165451 +0000 UTC m=+886.675098106" observedRunningTime="2026-02-18 19:32:13.049121843 +0000 UTC m=+892.754054508" watchObservedRunningTime="2026-02-18 19:32:13.049167584 +0000 UTC m=+892.754100249" Feb 18 19:32:13 crc kubenswrapper[4942]: I0218 19:32:13.085198 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" podStartSLOduration=4.324620252 podStartE2EDuration="29.085179417s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.926638598 +0000 UTC m=+865.631571263" lastFinishedPulling="2026-02-18 19:32:10.687197773 +0000 UTC m=+890.392130428" observedRunningTime="2026-02-18 
19:32:13.079056514 +0000 UTC m=+892.783989179" watchObservedRunningTime="2026-02-18 19:32:13.085179417 +0000 UTC m=+892.790112082" Feb 18 19:32:13 crc kubenswrapper[4942]: E0218 19:32:13.488915 4942 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04adf08c_4b6b_49c1_be25_d2cc8c67dce2.slice/crio-ed9840277aa9db07d748c964a420663376df6cd57140cd5dec23b586bf0ce286.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04adf08c_4b6b_49c1_be25_d2cc8c67dce2.slice/crio-conmon-ed9840277aa9db07d748c964a420663376df6cd57140cd5dec23b586bf0ce286.scope\": RecentStats: unable to find data in memory cache]" Feb 18 19:32:14 crc kubenswrapper[4942]: I0218 19:32:14.043260 4942 generic.go:334] "Generic (PLEG): container finished" podID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerID="ed9840277aa9db07d748c964a420663376df6cd57140cd5dec23b586bf0ce286" exitCode=0 Feb 18 19:32:14 crc kubenswrapper[4942]: I0218 19:32:14.043356 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp5w" event={"ID":"04adf08c-4b6b-49c1-be25-d2cc8c67dce2","Type":"ContainerDied","Data":"ed9840277aa9db07d748c964a420663376df6cd57140cd5dec23b586bf0ce286"} Feb 18 19:32:14 crc kubenswrapper[4942]: I0218 19:32:14.054547 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" event={"ID":"9d43a851-2d6c-4fe9-86e1-04c7d382b257","Type":"ContainerStarted","Data":"d3e41fbd7f44dec587c3a844fafd8f7ac654e355eeca4863ea82863df532edd5"} Feb 18 19:32:14 crc kubenswrapper[4942]: I0218 19:32:14.106751 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" podStartSLOduration=3.068937553 podStartE2EDuration="30.106731074s" 
podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.691686894 +0000 UTC m=+865.396619559" lastFinishedPulling="2026-02-18 19:32:12.729480415 +0000 UTC m=+892.434413080" observedRunningTime="2026-02-18 19:32:14.100563619 +0000 UTC m=+893.805496284" watchObservedRunningTime="2026-02-18 19:32:14.106731074 +0000 UTC m=+893.811663739" Feb 18 19:32:14 crc kubenswrapper[4942]: I0218 19:32:14.792416 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" Feb 18 19:32:15 crc kubenswrapper[4942]: I0218 19:32:15.065051 4942 generic.go:334] "Generic (PLEG): container finished" podID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerID="15e62c1b1fb7cf60b372e3cd480a3c4a8ecacdad111be63f1be8804883bfdafd" exitCode=0 Feb 18 19:32:15 crc kubenswrapper[4942]: I0218 19:32:15.065825 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmrtb" event={"ID":"a4cd7c2a-4d5f-48c2-9af4-bcc237367416","Type":"ContainerDied","Data":"15e62c1b1fb7cf60b372e3cd480a3c4a8ecacdad111be63f1be8804883bfdafd"} Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.071882 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" event={"ID":"716e0e70-0ef0-4843-9ad3-d84f47a3397f","Type":"ContainerStarted","Data":"a47c27420db7e4a7c48bc06d00cef1967f56d185f72e2e98c675f3e7e030695c"} Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.072231 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.074863 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp5w" 
event={"ID":"04adf08c-4b6b-49c1-be25-d2cc8c67dce2","Type":"ContainerStarted","Data":"c694aafcd30c06ff7abe831fc6b89c1cfac7ecced4de1f26eec3229fdb38fd35"} Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.105356 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" podStartSLOduration=27.914395288 podStartE2EDuration="32.105333379s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:32:11.172479586 +0000 UTC m=+890.877412251" lastFinishedPulling="2026-02-18 19:32:15.363417677 +0000 UTC m=+895.068350342" observedRunningTime="2026-02-18 19:32:16.102149209 +0000 UTC m=+895.807081884" watchObservedRunningTime="2026-02-18 19:32:16.105333379 +0000 UTC m=+895.810266054" Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.137706 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xhp5w" podStartSLOduration=5.5066826760000005 podStartE2EDuration="9.13768121s" podCreationTimestamp="2026-02-18 19:32:07 +0000 UTC" firstStartedPulling="2026-02-18 19:32:11.731868009 +0000 UTC m=+891.436800674" lastFinishedPulling="2026-02-18 19:32:15.362866543 +0000 UTC m=+895.067799208" observedRunningTime="2026-02-18 19:32:16.13010292 +0000 UTC m=+895.835035585" watchObservedRunningTime="2026-02-18 19:32:16.13768121 +0000 UTC m=+895.842613885" Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.185481 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.191955 4942 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.369144 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.592870 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt"] Feb 18 19:32:16 crc kubenswrapper[4942]: W0218 19:32:16.618132 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod230a2167_e078_48a6_93ce_84a37ff4ac02.slice/crio-1ab1536a8d4d7eba054845abb071cf879edb63e81a4f3e21e754ff08d0a88010 WatchSource:0}: Error finding container 1ab1536a8d4d7eba054845abb071cf879edb63e81a4f3e21e754ff08d0a88010: Status 404 returned error can't find the container with id 1ab1536a8d4d7eba054845abb071cf879edb63e81a4f3e21e754ff08d0a88010 Feb 18 19:32:17 crc kubenswrapper[4942]: I0218 19:32:17.084568 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmrtb" event={"ID":"a4cd7c2a-4d5f-48c2-9af4-bcc237367416","Type":"ContainerStarted","Data":"c3838fe4209ef9113ae57deb956ef6ca941b71cb6e13a15ad9c87fff7e92adfe"} Feb 18 19:32:17 crc kubenswrapper[4942]: I0218 19:32:17.086077 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" event={"ID":"230a2167-e078-48a6-93ce-84a37ff4ac02","Type":"ContainerStarted","Data":"1ab1536a8d4d7eba054845abb071cf879edb63e81a4f3e21e754ff08d0a88010"} Feb 18 19:32:17 crc kubenswrapper[4942]: I0218 19:32:17.109101 4942 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pmrtb" podStartSLOduration=13.434142697 podStartE2EDuration="16.109080318s" podCreationTimestamp="2026-02-18 19:32:01 +0000 UTC" firstStartedPulling="2026-02-18 19:32:13.016275559 +0000 UTC m=+892.721208224" lastFinishedPulling="2026-02-18 19:32:15.69121317 +0000 UTC m=+895.396145845" observedRunningTime="2026-02-18 19:32:17.105694803 +0000 UTC m=+896.810627478" watchObservedRunningTime="2026-02-18 19:32:17.109080318 +0000 UTC m=+896.814012983" Feb 18 19:32:17 crc kubenswrapper[4942]: I0218 19:32:17.405797 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:17 crc kubenswrapper[4942]: I0218 19:32:17.406009 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:18 crc kubenswrapper[4942]: I0218 19:32:18.097344 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" event={"ID":"a15b8ac2-0742-4fd7-9a14-005620c93a3d","Type":"ContainerStarted","Data":"0ad1cbaaf404d743218b61f9fddd7c41c34d9cb82a66025522f9402630a85f0f"} Feb 18 19:32:18 crc kubenswrapper[4942]: I0218 19:32:18.098136 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" Feb 18 19:32:18 crc kubenswrapper[4942]: I0218 19:32:18.115312 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" podStartSLOduration=2.181942753 podStartE2EDuration="34.115294519s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.521871615 +0000 UTC m=+865.226804280" lastFinishedPulling="2026-02-18 19:32:17.455223381 +0000 UTC m=+897.160156046" 
observedRunningTime="2026-02-18 19:32:18.112201261 +0000 UTC m=+897.817133936" watchObservedRunningTime="2026-02-18 19:32:18.115294519 +0000 UTC m=+897.820227184" Feb 18 19:32:18 crc kubenswrapper[4942]: I0218 19:32:18.486255 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xhp5w" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="registry-server" probeResult="failure" output=< Feb 18 19:32:18 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 19:32:18 crc kubenswrapper[4942]: > Feb 18 19:32:19 crc kubenswrapper[4942]: I0218 19:32:19.104911 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" event={"ID":"230a2167-e078-48a6-93ce-84a37ff4ac02","Type":"ContainerStarted","Data":"e22ee529428539d8e11ca7e2fdd7467105524ef09980bcb95ea88287e14f1ae0"} Feb 18 19:32:19 crc kubenswrapper[4942]: I0218 19:32:19.105163 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:32:19 crc kubenswrapper[4942]: I0218 19:32:19.126170 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" podStartSLOduration=32.813120143 podStartE2EDuration="35.126150656s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:32:16.620227195 +0000 UTC m=+896.325159870" lastFinishedPulling="2026-02-18 19:32:18.933257718 +0000 UTC m=+898.638190383" observedRunningTime="2026-02-18 19:32:19.119387547 +0000 UTC m=+898.824320202" watchObservedRunningTime="2026-02-18 19:32:19.126150656 +0000 UTC m=+898.831083321" Feb 18 19:32:20 crc kubenswrapper[4942]: I0218 19:32:20.858915 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:32:20 crc kubenswrapper[4942]: I0218 19:32:20.933636 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.260607 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7gx"] Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.264402 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.271636 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7gx"] Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.273986 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-catalog-content\") pod \"redhat-marketplace-dh7gx\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.274042 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-utilities\") pod \"redhat-marketplace-dh7gx\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.274076 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbnpw\" (UniqueName: \"kubernetes.io/projected/078b3d71-94dd-42b8-8804-84590a8abe44-kube-api-access-zbnpw\") pod \"redhat-marketplace-dh7gx\" (UID: 
\"078b3d71-94dd-42b8-8804-84590a8abe44\") " pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.375483 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-catalog-content\") pod \"redhat-marketplace-dh7gx\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.375553 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-utilities\") pod \"redhat-marketplace-dh7gx\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.375600 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbnpw\" (UniqueName: \"kubernetes.io/projected/078b3d71-94dd-42b8-8804-84590a8abe44-kube-api-access-zbnpw\") pod \"redhat-marketplace-dh7gx\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.375961 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-utilities\") pod \"redhat-marketplace-dh7gx\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.375974 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-catalog-content\") pod \"redhat-marketplace-dh7gx\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " 
pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.392842 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbnpw\" (UniqueName: \"kubernetes.io/projected/078b3d71-94dd-42b8-8804-84590a8abe44-kube-api-access-zbnpw\") pod \"redhat-marketplace-dh7gx\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.591973 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.924999 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7gx"] Feb 18 19:32:22 crc kubenswrapper[4942]: I0218 19:32:22.071885 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:22 crc kubenswrapper[4942]: I0218 19:32:22.071934 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:22 crc kubenswrapper[4942]: I0218 19:32:22.148147 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7gx" event={"ID":"078b3d71-94dd-42b8-8804-84590a8abe44","Type":"ContainerStarted","Data":"256b7d08b3f9eb072fd2995b03828975c96e7e95fda87c10cfd7a170fdd2595a"} Feb 18 19:32:23 crc kubenswrapper[4942]: I0218 19:32:23.117544 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pmrtb" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="registry-server" probeResult="failure" output=< Feb 18 19:32:23 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 19:32:23 crc kubenswrapper[4942]: > Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.374122 4942 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.388996 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.442441 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.485646 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.516721 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.608696 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.757521 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.792977 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.793392 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.840686 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.892565 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" Feb 18 19:32:25 crc kubenswrapper[4942]: I0218 19:32:25.057657 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" Feb 18 19:32:25 crc kubenswrapper[4942]: I0218 19:32:25.116250 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" Feb 18 19:32:25 crc kubenswrapper[4942]: I0218 19:32:25.123570 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" Feb 18 19:32:25 crc kubenswrapper[4942]: I0218 19:32:25.203049 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" Feb 18 19:32:25 crc kubenswrapper[4942]: I0218 19:32:25.220663 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" Feb 18 19:32:25 crc kubenswrapper[4942]: I0218 19:32:25.253832 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" Feb 18 19:32:26 crc kubenswrapper[4942]: I0218 19:32:26.375455 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:32:27 crc kubenswrapper[4942]: I0218 19:32:27.465905 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:27 crc kubenswrapper[4942]: I0218 19:32:27.513811 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:27 crc kubenswrapper[4942]: I0218 19:32:27.699945 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xhp5w"] Feb 18 19:32:29 crc kubenswrapper[4942]: I0218 19:32:29.194271 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xhp5w" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="registry-server" containerID="cri-o://c694aafcd30c06ff7abe831fc6b89c1cfac7ecced4de1f26eec3229fdb38fd35" gracePeriod=2 Feb 18 19:32:32 crc kubenswrapper[4942]: I0218 19:32:32.121152 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:32 crc kubenswrapper[4942]: I0218 19:32:32.183534 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:33 crc kubenswrapper[4942]: I0218 19:32:33.283438 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pmrtb"] Feb 18 19:32:33 crc kubenswrapper[4942]: I0218 19:32:33.283696 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pmrtb" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="registry-server" containerID="cri-o://c3838fe4209ef9113ae57deb956ef6ca941b71cb6e13a15ad9c87fff7e92adfe" gracePeriod=2 Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.246801 4942 generic.go:334] "Generic (PLEG): container finished" podID="078b3d71-94dd-42b8-8804-84590a8abe44" containerID="293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279" exitCode=0 Feb 18 19:32:36 crc kubenswrapper[4942]: 
I0218 19:32:36.247040 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7gx" event={"ID":"078b3d71-94dd-42b8-8804-84590a8abe44","Type":"ContainerDied","Data":"293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279"} Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.252675 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pmrtb_a4cd7c2a-4d5f-48c2-9af4-bcc237367416/registry-server/0.log" Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.254238 4942 generic.go:334] "Generic (PLEG): container finished" podID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerID="c3838fe4209ef9113ae57deb956ef6ca941b71cb6e13a15ad9c87fff7e92adfe" exitCode=137 Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.254298 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmrtb" event={"ID":"a4cd7c2a-4d5f-48c2-9af4-bcc237367416","Type":"ContainerDied","Data":"c3838fe4209ef9113ae57deb956ef6ca941b71cb6e13a15ad9c87fff7e92adfe"} Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.256599 4942 generic.go:334] "Generic (PLEG): container finished" podID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerID="c694aafcd30c06ff7abe831fc6b89c1cfac7ecced4de1f26eec3229fdb38fd35" exitCode=0 Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.256663 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp5w" event={"ID":"04adf08c-4b6b-49c1-be25-d2cc8c67dce2","Type":"ContainerDied","Data":"c694aafcd30c06ff7abe831fc6b89c1cfac7ecced4de1f26eec3229fdb38fd35"} Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.798334 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.829020 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pmrtb_a4cd7c2a-4d5f-48c2-9af4-bcc237367416/registry-server/0.log" Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.830465 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.925504 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-utilities\") pod \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.925629 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-catalog-content\") pod \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.925703 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm557\" (UniqueName: \"kubernetes.io/projected/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-kube-api-access-tm557\") pod \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.926650 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-utilities" (OuterVolumeSpecName: "utilities") pod "04adf08c-4b6b-49c1-be25-d2cc8c67dce2" (UID: "04adf08c-4b6b-49c1-be25-d2cc8c67dce2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.930849 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-kube-api-access-tm557" (OuterVolumeSpecName: "kube-api-access-tm557") pod "04adf08c-4b6b-49c1-be25-d2cc8c67dce2" (UID: "04adf08c-4b6b-49c1-be25-d2cc8c67dce2"). InnerVolumeSpecName "kube-api-access-tm557". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.981343 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04adf08c-4b6b-49c1-be25-d2cc8c67dce2" (UID: "04adf08c-4b6b-49c1-be25-d2cc8c67dce2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.026894 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-utilities\") pod \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.027116 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-catalog-content\") pod \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.027249 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksk4m\" (UniqueName: \"kubernetes.io/projected/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-kube-api-access-ksk4m\") pod \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\" (UID: 
\"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.027568 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.027645 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm557\" (UniqueName: \"kubernetes.io/projected/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-kube-api-access-tm557\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.027701 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-utilities" (OuterVolumeSpecName: "utilities") pod "a4cd7c2a-4d5f-48c2-9af4-bcc237367416" (UID: "a4cd7c2a-4d5f-48c2-9af4-bcc237367416"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.027712 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.030394 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-kube-api-access-ksk4m" (OuterVolumeSpecName: "kube-api-access-ksk4m") pod "a4cd7c2a-4d5f-48c2-9af4-bcc237367416" (UID: "a4cd7c2a-4d5f-48c2-9af4-bcc237367416"). InnerVolumeSpecName "kube-api-access-ksk4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.129067 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksk4m\" (UniqueName: \"kubernetes.io/projected/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-kube-api-access-ksk4m\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.129339 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.172018 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4cd7c2a-4d5f-48c2-9af4-bcc237367416" (UID: "a4cd7c2a-4d5f-48c2-9af4-bcc237367416"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.230950 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.267111 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pmrtb_a4cd7c2a-4d5f-48c2-9af4-bcc237367416/registry-server/0.log" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.267747 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmrtb" event={"ID":"a4cd7c2a-4d5f-48c2-9af4-bcc237367416","Type":"ContainerDied","Data":"dee0baac7e3a9c49a9187f5763b3e97bcf6e3a78d211e733d817f073fc2a3c4b"} Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.267805 4942 scope.go:117] "RemoveContainer" 
containerID="c3838fe4209ef9113ae57deb956ef6ca941b71cb6e13a15ad9c87fff7e92adfe" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.267931 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.281435 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp5w" event={"ID":"04adf08c-4b6b-49c1-be25-d2cc8c67dce2","Type":"ContainerDied","Data":"5e3cbf30742449f377e56eae72b81f7947e381700f22252f373a197aae1f9b45"} Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.281566 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.285054 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" event={"ID":"11715b33-f996-46bf-81db-0557e84e7fea","Type":"ContainerStarted","Data":"c44555440393e73229dace887739ba537997ccb145b340ded9403ca08555b404"} Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.285617 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.291137 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" event={"ID":"cde9a09e-2dfe-410e-95ad-8f297b517ef4","Type":"ContainerStarted","Data":"77c02583ef350bf347f4c26bf4f780abdaad82069f189f920d5de32d851b2698"} Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.291929 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.307456 4942 scope.go:117] 
"RemoveContainer" containerID="15e62c1b1fb7cf60b372e3cd480a3c4a8ecacdad111be63f1be8804883bfdafd" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.317068 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" podStartSLOduration=2.197664942 podStartE2EDuration="53.317052718s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.461950282 +0000 UTC m=+865.166882947" lastFinishedPulling="2026-02-18 19:32:36.581338038 +0000 UTC m=+916.286270723" observedRunningTime="2026-02-18 19:32:37.315332134 +0000 UTC m=+917.020264799" watchObservedRunningTime="2026-02-18 19:32:37.317052718 +0000 UTC m=+917.021985383" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.327599 4942 scope.go:117] "RemoveContainer" containerID="c0f81453e7d6dc223b51f45fabda95b58d634396c8127da2162b425bed1f7043" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.342987 4942 scope.go:117] "RemoveContainer" containerID="c694aafcd30c06ff7abe831fc6b89c1cfac7ecced4de1f26eec3229fdb38fd35" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.348362 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" podStartSLOduration=2.478352544 podStartE2EDuration="53.34832778s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.704460375 +0000 UTC m=+865.409393040" lastFinishedPulling="2026-02-18 19:32:36.574435601 +0000 UTC m=+916.279368276" observedRunningTime="2026-02-18 19:32:37.333988422 +0000 UTC m=+917.038921117" watchObservedRunningTime="2026-02-18 19:32:37.34832778 +0000 UTC m=+917.053260435" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.352252 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xhp5w"] Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 
19:32:37.359057 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xhp5w"] Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.360948 4942 scope.go:117] "RemoveContainer" containerID="ed9840277aa9db07d748c964a420663376df6cd57140cd5dec23b586bf0ce286" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.371612 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pmrtb"] Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.374908 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pmrtb"] Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.384547 4942 scope.go:117] "RemoveContainer" containerID="d3a4dd0670baf5152526b789369cfc767a3b4f746a7ebf6b7d9421c87331aa1b" Feb 18 19:32:38 crc kubenswrapper[4942]: I0218 19:32:38.303610 4942 generic.go:334] "Generic (PLEG): container finished" podID="078b3d71-94dd-42b8-8804-84590a8abe44" containerID="db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e" exitCode=0 Feb 18 19:32:38 crc kubenswrapper[4942]: I0218 19:32:38.303928 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7gx" event={"ID":"078b3d71-94dd-42b8-8804-84590a8abe44","Type":"ContainerDied","Data":"db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e"} Feb 18 19:32:39 crc kubenswrapper[4942]: I0218 19:32:39.045381 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" path="/var/lib/kubelet/pods/04adf08c-4b6b-49c1-be25-d2cc8c67dce2/volumes" Feb 18 19:32:39 crc kubenswrapper[4942]: I0218 19:32:39.046654 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" path="/var/lib/kubelet/pods/a4cd7c2a-4d5f-48c2-9af4-bcc237367416/volumes" Feb 18 19:32:39 crc kubenswrapper[4942]: I0218 19:32:39.322030 4942 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7gx" event={"ID":"078b3d71-94dd-42b8-8804-84590a8abe44","Type":"ContainerStarted","Data":"39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a"} Feb 18 19:32:39 crc kubenswrapper[4942]: I0218 19:32:39.351531 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dh7gx" podStartSLOduration=16.11519219 podStartE2EDuration="18.351506436s" podCreationTimestamp="2026-02-18 19:32:21 +0000 UTC" firstStartedPulling="2026-02-18 19:32:36.504319823 +0000 UTC m=+916.209252488" lastFinishedPulling="2026-02-18 19:32:38.740634069 +0000 UTC m=+918.445566734" observedRunningTime="2026-02-18 19:32:39.351430124 +0000 UTC m=+919.056362799" watchObservedRunningTime="2026-02-18 19:32:39.351506436 +0000 UTC m=+919.056439111" Feb 18 19:32:41 crc kubenswrapper[4942]: I0218 19:32:41.592840 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:41 crc kubenswrapper[4942]: I0218 19:32:41.594160 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:41 crc kubenswrapper[4942]: I0218 19:32:41.652484 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:44 crc kubenswrapper[4942]: I0218 19:32:44.750739 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" Feb 18 19:32:44 crc kubenswrapper[4942]: I0218 19:32:44.855135 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" Feb 18 19:32:51 crc kubenswrapper[4942]: I0218 19:32:51.639917 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:51 crc kubenswrapper[4942]: I0218 19:32:51.685836 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7gx"] Feb 18 19:32:52 crc kubenswrapper[4942]: I0218 19:32:52.428620 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dh7gx" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" containerName="registry-server" containerID="cri-o://39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a" gracePeriod=2 Feb 18 19:32:52 crc kubenswrapper[4942]: I0218 19:32:52.893638 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:52 crc kubenswrapper[4942]: I0218 19:32:52.998889 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbnpw\" (UniqueName: \"kubernetes.io/projected/078b3d71-94dd-42b8-8804-84590a8abe44-kube-api-access-zbnpw\") pod \"078b3d71-94dd-42b8-8804-84590a8abe44\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " Feb 18 19:32:52 crc kubenswrapper[4942]: I0218 19:32:52.998944 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-utilities\") pod \"078b3d71-94dd-42b8-8804-84590a8abe44\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " Feb 18 19:32:52 crc kubenswrapper[4942]: I0218 19:32:52.999088 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-catalog-content\") pod \"078b3d71-94dd-42b8-8804-84590a8abe44\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.000137 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-utilities" (OuterVolumeSpecName: "utilities") pod "078b3d71-94dd-42b8-8804-84590a8abe44" (UID: "078b3d71-94dd-42b8-8804-84590a8abe44"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.004936 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/078b3d71-94dd-42b8-8804-84590a8abe44-kube-api-access-zbnpw" (OuterVolumeSpecName: "kube-api-access-zbnpw") pod "078b3d71-94dd-42b8-8804-84590a8abe44" (UID: "078b3d71-94dd-42b8-8804-84590a8abe44"). InnerVolumeSpecName "kube-api-access-zbnpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.026795 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "078b3d71-94dd-42b8-8804-84590a8abe44" (UID: "078b3d71-94dd-42b8-8804-84590a8abe44"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.100657 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.100698 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbnpw\" (UniqueName: \"kubernetes.io/projected/078b3d71-94dd-42b8-8804-84590a8abe44-kube-api-access-zbnpw\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.100712 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.435879 4942 generic.go:334] "Generic (PLEG): container finished" podID="078b3d71-94dd-42b8-8804-84590a8abe44" containerID="39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a" exitCode=0 Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.435931 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7gx" event={"ID":"078b3d71-94dd-42b8-8804-84590a8abe44","Type":"ContainerDied","Data":"39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a"} Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.435970 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7gx" event={"ID":"078b3d71-94dd-42b8-8804-84590a8abe44","Type":"ContainerDied","Data":"256b7d08b3f9eb072fd2995b03828975c96e7e95fda87c10cfd7a170fdd2595a"} Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.435997 4942 scope.go:117] "RemoveContainer" containerID="39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 
19:32:53.436043 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.458589 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7gx"] Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.471901 4942 scope.go:117] "RemoveContainer" containerID="db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.472078 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7gx"] Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.490407 4942 scope.go:117] "RemoveContainer" containerID="293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.517922 4942 scope.go:117] "RemoveContainer" containerID="39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a" Feb 18 19:32:53 crc kubenswrapper[4942]: E0218 19:32:53.518290 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a\": container with ID starting with 39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a not found: ID does not exist" containerID="39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.518342 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a"} err="failed to get container status \"39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a\": rpc error: code = NotFound desc = could not find container \"39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a\": container with ID starting with 
39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a not found: ID does not exist" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.518361 4942 scope.go:117] "RemoveContainer" containerID="db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e" Feb 18 19:32:53 crc kubenswrapper[4942]: E0218 19:32:53.518551 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e\": container with ID starting with db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e not found: ID does not exist" containerID="db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.518570 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e"} err="failed to get container status \"db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e\": rpc error: code = NotFound desc = could not find container \"db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e\": container with ID starting with db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e not found: ID does not exist" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.518581 4942 scope.go:117] "RemoveContainer" containerID="293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279" Feb 18 19:32:53 crc kubenswrapper[4942]: E0218 19:32:53.518818 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279\": container with ID starting with 293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279 not found: ID does not exist" containerID="293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279" Feb 18 19:32:53 crc 
kubenswrapper[4942]: I0218 19:32:53.518842 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279"} err="failed to get container status \"293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279\": rpc error: code = NotFound desc = could not find container \"293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279\": container with ID starting with 293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279 not found: ID does not exist" Feb 18 19:32:55 crc kubenswrapper[4942]: I0218 19:32:55.047266 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" path="/var/lib/kubelet/pods/078b3d71-94dd-42b8-8804-84590a8abe44/volumes" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.669378 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b2r8r"] Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.672946 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="extract-content" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.673103 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="extract-content" Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.673170 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" containerName="extract-utilities" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.673228 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" containerName="extract-utilities" Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.673300 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" containerName="extract-content" Feb 18 
19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.673359 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" containerName="extract-content" Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.673433 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="registry-server" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.673492 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="registry-server" Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.673552 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="extract-utilities" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.673603 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="extract-utilities" Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.673662 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="extract-utilities" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.673724 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="extract-utilities" Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.673809 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="extract-content" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.673875 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="extract-content" Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.673935 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" containerName="registry-server" Feb 18 
19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.673991 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" containerName="registry-server" Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.674047 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="registry-server" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.674095 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="registry-server" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.674302 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" containerName="registry-server" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.674363 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="registry-server" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.674424 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="registry-server" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.675467 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.679077 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.679107 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.679094 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.679499 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-kcnds" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.685747 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b2r8r"] Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.746515 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vvvkv"] Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.747632 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.756126 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vvvkv"] Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.759422 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.847680 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnnbj\" (UniqueName: \"kubernetes.io/projected/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-kube-api-access-hnnbj\") pod \"dnsmasq-dns-675f4bcbfc-b2r8r\" (UID: \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.847755 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-config\") pod \"dnsmasq-dns-675f4bcbfc-b2r8r\" (UID: \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.949528 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-config\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.949635 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnnbj\" (UniqueName: \"kubernetes.io/projected/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-kube-api-access-hnnbj\") pod \"dnsmasq-dns-675f4bcbfc-b2r8r\" (UID: \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 
18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.949684 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.949729 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxlmd\" (UniqueName: \"kubernetes.io/projected/9fc86b17-5060-4828-9a92-7e40170ea226-kube-api-access-rxlmd\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.949883 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-config\") pod \"dnsmasq-dns-675f4bcbfc-b2r8r\" (UID: \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.950902 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-config\") pod \"dnsmasq-dns-675f4bcbfc-b2r8r\" (UID: \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.981742 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnnbj\" (UniqueName: \"kubernetes.io/projected/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-kube-api-access-hnnbj\") pod \"dnsmasq-dns-675f4bcbfc-b2r8r\" (UID: \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 
19:33:02.996943 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.056925 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-config\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.057025 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.057063 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxlmd\" (UniqueName: \"kubernetes.io/projected/9fc86b17-5060-4828-9a92-7e40170ea226-kube-api-access-rxlmd\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.058407 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-config\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.059136 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.085922 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxlmd\" (UniqueName: \"kubernetes.io/projected/9fc86b17-5060-4828-9a92-7e40170ea226-kube-api-access-rxlmd\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.360694 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.506722 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b2r8r"] Feb 18 19:33:03 crc kubenswrapper[4942]: W0218 19:33:03.824221 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fc86b17_5060_4828_9a92_7e40170ea226.slice/crio-e8ed36483d76587ac38831cc4f79afb97b35245143a1273adb206d00597090f3 WatchSource:0}: Error finding container e8ed36483d76587ac38831cc4f79afb97b35245143a1273adb206d00597090f3: Status 404 returned error can't find the container with id e8ed36483d76587ac38831cc4f79afb97b35245143a1273adb206d00597090f3 Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.825160 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vvvkv"] Feb 18 19:33:04 crc kubenswrapper[4942]: I0218 19:33:04.521150 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" event={"ID":"9fc86b17-5060-4828-9a92-7e40170ea226","Type":"ContainerStarted","Data":"e8ed36483d76587ac38831cc4f79afb97b35245143a1273adb206d00597090f3"} Feb 18 19:33:04 crc kubenswrapper[4942]: I0218 19:33:04.522746 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" 
event={"ID":"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67","Type":"ContainerStarted","Data":"154555e460749d363c1de1a0b1abd6b993012b64a0c3c4566686feeefaddd1c6"} Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.504914 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b2r8r"] Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.534257 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2mvhf"] Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.535802 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.561603 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2mvhf"] Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.697787 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmzc4\" (UniqueName: \"kubernetes.io/projected/b34cdd67-e888-4718-8889-0dc284187fcc-kube-api-access-qmzc4\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.697881 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.697903 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-config\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " 
pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.752130 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vvvkv"] Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.779362 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99h4x"] Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.789837 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.799580 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.799617 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-config\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.799664 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmzc4\" (UniqueName: \"kubernetes.io/projected/b34cdd67-e888-4718-8889-0dc284187fcc-kube-api-access-qmzc4\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.800623 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: 
\"b34cdd67-e888-4718-8889-0dc284187fcc\") " pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.809508 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-config\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.814766 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99h4x"] Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.822093 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmzc4\" (UniqueName: \"kubernetes.io/projected/b34cdd67-e888-4718-8889-0dc284187fcc-kube-api-access-qmzc4\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.862692 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.900456 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrdxp\" (UniqueName: \"kubernetes.io/projected/b7887418-e8d9-434c-a8e3-fed787cbc8c8-kube-api-access-vrdxp\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.900507 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.900632 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-config\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.002058 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.002392 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-config\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.002447 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrdxp\" (UniqueName: \"kubernetes.io/projected/b7887418-e8d9-434c-a8e3-fed787cbc8c8-kube-api-access-vrdxp\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.003125 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.003274 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-config\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.032889 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrdxp\" (UniqueName: \"kubernetes.io/projected/b7887418-e8d9-434c-a8e3-fed787cbc8c8-kube-api-access-vrdxp\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.115366 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.377686 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2mvhf"] Feb 18 19:33:06 crc kubenswrapper[4942]: W0218 19:33:06.384013 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb34cdd67_e888_4718_8889_0dc284187fcc.slice/crio-2f50d2e2d9920890883c43f7e3f4d7d184c62c40d9aa2c1ba9d6825c0e37fee3 WatchSource:0}: Error finding container 2f50d2e2d9920890883c43f7e3f4d7d184c62c40d9aa2c1ba9d6825c0e37fee3: Status 404 returned error can't find the container with id 2f50d2e2d9920890883c43f7e3f4d7d184c62c40d9aa2c1ba9d6825c0e37fee3 Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.542940 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" event={"ID":"b34cdd67-e888-4718-8889-0dc284187fcc","Type":"ContainerStarted","Data":"2f50d2e2d9920890883c43f7e3f4d7d184c62c40d9aa2c1ba9d6825c0e37fee3"} Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.628731 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99h4x"] Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.660168 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.661342 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.663523 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.663704 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jnzzx" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.664299 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.664365 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.664469 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.665601 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.665915 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.682708 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.815678 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.815717 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.815959 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-config-data\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.816015 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.816032 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77de5cb0-e446-407d-9e32-b13f39c84ae2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.816093 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wqkf\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-kube-api-access-8wqkf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.816123 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77de5cb0-e446-407d-9e32-b13f39c84ae2-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.816338 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.816403 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.816461 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.816514 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918144 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918198 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918443 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918485 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918522 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-config-data\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918554 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918577 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77de5cb0-e446-407d-9e32-b13f39c84ae2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918603 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wqkf\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-kube-api-access-8wqkf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918628 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77de5cb0-e446-407d-9e32-b13f39c84ae2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918648 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918665 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918705 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918724 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.919508 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.919886 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.919980 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.920554 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-config-data\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " 
pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.924897 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.925217 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77de5cb0-e446-407d-9e32-b13f39c84ae2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.936218 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77de5cb0-e446-407d-9e32-b13f39c84ae2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.936835 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.936915 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.939641 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.943706 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.943952 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wp8g5" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.944061 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.944935 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.946083 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.946187 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.946301 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.949682 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wqkf\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-kube-api-access-8wqkf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.965076 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.974192 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.987196 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.120877 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.120921 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6b41292-c562-4964-bb25-d8945415b3da-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.120946 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.120964 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6b41292-c562-4964-bb25-d8945415b3da-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.120992 4942 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.121040 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.121068 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.121086 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.121258 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.121372 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9vpz\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-kube-api-access-p9vpz\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.121399 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225493 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9vpz\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-kube-api-access-p9vpz\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225546 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225573 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225589 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6b41292-c562-4964-bb25-d8945415b3da-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225611 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225628 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6b41292-c562-4964-bb25-d8945415b3da-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225652 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225680 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225705 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225724 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225754 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.226146 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.226492 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.227484 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.228142 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.228469 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.228478 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.230988 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6b41292-c562-4964-bb25-d8945415b3da-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.232977 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6b41292-c562-4964-bb25-d8945415b3da-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc 
kubenswrapper[4942]: I0218 19:33:07.233032 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.234158 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.252546 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9vpz\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-kube-api-access-p9vpz\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.253869 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.325486 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.168952 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.170721 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.174536 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.174701 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.174883 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-z58zg" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.175022 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.183588 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.187709 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.341739 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39270f2-0125-43f1-a2b3-cda4813614dd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.341865 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39270f2-0125-43f1-a2b3-cda4813614dd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.341936 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e39270f2-0125-43f1-a2b3-cda4813614dd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.342034 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.342070 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-config-data-default\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.342108 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqdfr\" (UniqueName: \"kubernetes.io/projected/e39270f2-0125-43f1-a2b3-cda4813614dd-kube-api-access-wqdfr\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.342129 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.342171 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-kolla-config\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.444906 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39270f2-0125-43f1-a2b3-cda4813614dd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.444960 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39270f2-0125-43f1-a2b3-cda4813614dd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.445030 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e39270f2-0125-43f1-a2b3-cda4813614dd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.445058 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.445089 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.445125 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqdfr\" (UniqueName: \"kubernetes.io/projected/e39270f2-0125-43f1-a2b3-cda4813614dd-kube-api-access-wqdfr\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.445146 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.445178 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-kolla-config\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.445925 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.446100 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-kolla-config\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 
19:33:08.447079 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-config-data-default\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.449452 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.449549 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e39270f2-0125-43f1-a2b3-cda4813614dd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.451414 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39270f2-0125-43f1-a2b3-cda4813614dd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.454160 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39270f2-0125-43f1-a2b3-cda4813614dd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.477907 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.481805 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqdfr\" (UniqueName: \"kubernetes.io/projected/e39270f2-0125-43f1-a2b3-cda4813614dd-kube-api-access-wqdfr\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.500493 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: W0218 19:33:09.521134 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7887418_e8d9_434c_a8e3_fed787cbc8c8.slice/crio-56b3e02a29b20e42401279e2e4fcf7e7debb435a70a1f70075eaa1d581cacb4f WatchSource:0}: Error finding container 56b3e02a29b20e42401279e2e4fcf7e7debb435a70a1f70075eaa1d581cacb4f: Status 404 returned error can't find the container with id 56b3e02a29b20e42401279e2e4fcf7e7debb435a70a1f70075eaa1d581cacb4f Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.564519 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" event={"ID":"b7887418-e8d9-434c-a8e3-fed787cbc8c8","Type":"ContainerStarted","Data":"56b3e02a29b20e42401279e2e4fcf7e7debb435a70a1f70075eaa1d581cacb4f"} Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.810070 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.811458 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.814933 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.815155 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.815215 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.815165 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-gxmcq" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.836527 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.846300 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.847303 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.849548 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.849730 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.849856 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-ts7vg" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.854410 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.966641 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.966938 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.966956 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hvvh\" (UniqueName: \"kubernetes.io/projected/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-kube-api-access-4hvvh\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 
19:33:09.966972 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242ed220-c516-4f30-bb5b-69f28626101a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967001 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/242ed220-c516-4f30-bb5b-69f28626101a-kolla-config\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967020 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967047 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967062 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8fg9\" (UniqueName: \"kubernetes.io/projected/242ed220-c516-4f30-bb5b-69f28626101a-kube-api-access-w8fg9\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967078 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/242ed220-c516-4f30-bb5b-69f28626101a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967100 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967121 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/242ed220-c516-4f30-bb5b-69f28626101a-config-data\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967160 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967190 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068619 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068661 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8fg9\" (UniqueName: \"kubernetes.io/projected/242ed220-c516-4f30-bb5b-69f28626101a-kube-api-access-w8fg9\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068680 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/242ed220-c516-4f30-bb5b-69f28626101a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068700 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068723 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/242ed220-c516-4f30-bb5b-69f28626101a-config-data\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068778 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-config-data-generated\") pod \"openstack-cell1-galera-0\" 
(UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068807 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068843 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068856 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068871 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hvvh\" (UniqueName: \"kubernetes.io/projected/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-kube-api-access-4hvvh\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068885 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242ed220-c516-4f30-bb5b-69f28626101a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc 
kubenswrapper[4942]: I0218 19:33:10.068916 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/242ed220-c516-4f30-bb5b-69f28626101a-kolla-config\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.069102 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.069433 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.070285 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/242ed220-c516-4f30-bb5b-69f28626101a-kolla-config\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.070355 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.070370 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.070642 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.070958 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/242ed220-c516-4f30-bb5b-69f28626101a-config-data\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.071128 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.077441 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242ed220-c516-4f30-bb5b-69f28626101a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.077726 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: 
\"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.080133 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.089230 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/242ed220-c516-4f30-bb5b-69f28626101a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.092254 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8fg9\" (UniqueName: \"kubernetes.io/projected/242ed220-c516-4f30-bb5b-69f28626101a-kube-api-access-w8fg9\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.095059 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hvvh\" (UniqueName: \"kubernetes.io/projected/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-kube-api-access-4hvvh\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.104576 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.180973 4942 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.193173 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 18 19:33:12 crc kubenswrapper[4942]: I0218 19:33:12.209013 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:33:12 crc kubenswrapper[4942]: I0218 19:33:12.209948 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:33:12 crc kubenswrapper[4942]: I0218 19:33:12.212804 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-pblt6" Feb 18 19:33:12 crc kubenswrapper[4942]: I0218 19:33:12.225818 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:33:12 crc kubenswrapper[4942]: I0218 19:33:12.304958 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f75rx\" (UniqueName: \"kubernetes.io/projected/a8f1712c-12df-4ca2-81d3-dc649c747868-kube-api-access-f75rx\") pod \"kube-state-metrics-0\" (UID: \"a8f1712c-12df-4ca2-81d3-dc649c747868\") " pod="openstack/kube-state-metrics-0" Feb 18 19:33:12 crc kubenswrapper[4942]: I0218 19:33:12.406625 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f75rx\" (UniqueName: \"kubernetes.io/projected/a8f1712c-12df-4ca2-81d3-dc649c747868-kube-api-access-f75rx\") pod \"kube-state-metrics-0\" (UID: \"a8f1712c-12df-4ca2-81d3-dc649c747868\") " pod="openstack/kube-state-metrics-0" Feb 18 19:33:12 crc kubenswrapper[4942]: I0218 19:33:12.433019 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f75rx\" (UniqueName: \"kubernetes.io/projected/a8f1712c-12df-4ca2-81d3-dc649c747868-kube-api-access-f75rx\") pod 
\"kube-state-metrics-0\" (UID: \"a8f1712c-12df-4ca2-81d3-dc649c747868\") " pod="openstack/kube-state-metrics-0" Feb 18 19:33:12 crc kubenswrapper[4942]: I0218 19:33:12.534026 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.352470 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.355072 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.360350 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.360888 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.361191 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.361415 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.361563 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.362118 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7f4m2" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.362498 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.375951 4942 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.377701 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.425884 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.425994 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.426031 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.426073 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " 
pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.426237 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.426299 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pvjr\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-kube-api-access-6pvjr\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.426418 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.426525 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.426584 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.426745 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.528100 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.528845 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.528907 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.528931 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.528979 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.529006 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.529032 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pvjr\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-kube-api-access-6pvjr\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.529075 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.529115 4942 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.529146 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.529755 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.530096 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.530157 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: 
I0218 19:33:13.535585 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.538500 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.539097 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.539116 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.554846 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pvjr\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-kube-api-access-6pvjr\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.565445 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" 
(UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.591829 4942 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.591871 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/70b345b463ff13ff33bce45da0f4a8796a1574afa2d8fd2ecf4f2239b34767fb/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.696944 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.976818 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.662559 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-llsph"] Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.663822 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.670400 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-llsph"] Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.672039 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.672273 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.672490 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-2t4hx" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.677519 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-7xrn9"] Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.679513 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.733781 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7xrn9"] Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.767592 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-etc-ovs\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.767637 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28fe292c-6cda-4e3b-bce3-544ded95930b-combined-ca-bundle\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.767664 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-log\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.767692 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-run\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.767708 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-run-ovn\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.767737 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-run\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.767918 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxmr6\" (UniqueName: \"kubernetes.io/projected/a740e80f-15e5-4745-bb1d-96da2561f33b-kube-api-access-dxmr6\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.767978 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhphp\" (UniqueName: \"kubernetes.io/projected/28fe292c-6cda-4e3b-bce3-544ded95930b-kube-api-access-mhphp\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.768080 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a740e80f-15e5-4745-bb1d-96da2561f33b-scripts\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.768117 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-lib\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.768243 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-log-ovn\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.768324 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/28fe292c-6cda-4e3b-bce3-544ded95930b-ovn-controller-tls-certs\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.768419 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28fe292c-6cda-4e3b-bce3-544ded95930b-scripts\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.856633 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.859587 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.862488 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-6dh9g" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.863206 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.863356 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.863388 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.863365 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870261 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-log-ovn\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870308 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/28fe292c-6cda-4e3b-bce3-544ded95930b-ovn-controller-tls-certs\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870345 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28fe292c-6cda-4e3b-bce3-544ded95930b-scripts\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " 
pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870393 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-etc-ovs\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870422 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28fe292c-6cda-4e3b-bce3-544ded95930b-combined-ca-bundle\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870443 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-log\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870469 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-run\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870483 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-run-ovn\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870511 4942 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-run\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870530 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxmr6\" (UniqueName: \"kubernetes.io/projected/a740e80f-15e5-4745-bb1d-96da2561f33b-kube-api-access-dxmr6\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870565 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhphp\" (UniqueName: \"kubernetes.io/projected/28fe292c-6cda-4e3b-bce3-544ded95930b-kube-api-access-mhphp\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870585 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a740e80f-15e5-4745-bb1d-96da2561f33b-scripts\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870613 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-lib\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.871108 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-run\") pod 
\"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.871174 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-lib\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.871270 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-run-ovn\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.871297 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-log\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.871308 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-run\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.871901 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-etc-ovs\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.875798 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-log-ovn\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.877064 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a740e80f-15e5-4745-bb1d-96da2561f33b-scripts\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.877353 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28fe292c-6cda-4e3b-bce3-544ded95930b-combined-ca-bundle\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.881560 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/28fe292c-6cda-4e3b-bce3-544ded95930b-ovn-controller-tls-certs\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.882390 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28fe292c-6cda-4e3b-bce3-544ded95930b-scripts\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.885744 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.895413 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxmr6\" (UniqueName: \"kubernetes.io/projected/a740e80f-15e5-4745-bb1d-96da2561f33b-kube-api-access-dxmr6\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.900246 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhphp\" (UniqueName: \"kubernetes.io/projected/28fe292c-6cda-4e3b-bce3-544ded95930b-kube-api-access-mhphp\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.972502 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9c56d4c-8421-4b07-992d-c0c45223259f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.972837 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.973006 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9c56d4c-8421-4b07-992d-c0c45223259f-config\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.973112 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b9c56d4c-8421-4b07-992d-c0c45223259f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.973215 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.973297 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lwc4\" (UniqueName: \"kubernetes.io/projected/b9c56d4c-8421-4b07-992d-c0c45223259f-kube-api-access-5lwc4\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.973430 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.973524 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.028241 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-llsph" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.035054 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.075214 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.075292 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.075363 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9c56d4c-8421-4b07-992d-c0c45223259f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.075384 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.075411 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9c56d4c-8421-4b07-992d-c0c45223259f-config\") pod \"ovsdbserver-nb-0\" (UID: 
\"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.075436 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b9c56d4c-8421-4b07-992d-c0c45223259f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.075475 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.075496 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lwc4\" (UniqueName: \"kubernetes.io/projected/b9c56d4c-8421-4b07-992d-c0c45223259f-kube-api-access-5lwc4\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.077228 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b9c56d4c-8421-4b07-992d-c0c45223259f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.077685 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9c56d4c-8421-4b07-992d-c0c45223259f-config\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.078026 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.080266 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9c56d4c-8421-4b07-992d-c0c45223259f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.081123 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.082000 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.089095 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.094704 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lwc4\" (UniqueName: \"kubernetes.io/projected/b9c56d4c-8421-4b07-992d-c0c45223259f-kube-api-access-5lwc4\") pod \"ovsdbserver-nb-0\" (UID: 
\"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.108809 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.174415 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.427454 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.561073 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.562735 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.562845 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.572171 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-6ghpq" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.572395 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.572558 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.572649 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.634978 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.635060 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.635117 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.635147 4942 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1f9573-3ebf-4dbf-a269-938392cbd141-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.635176 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a1f9573-3ebf-4dbf-a269-938392cbd141-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.635195 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a1f9573-3ebf-4dbf-a269-938392cbd141-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.635224 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-247h5\" (UniqueName: \"kubernetes.io/projected/4a1f9573-3ebf-4dbf-a269-938392cbd141-kube-api-access-247h5\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.635240 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.737512 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.737579 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1f9573-3ebf-4dbf-a269-938392cbd141-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738007 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a1f9573-3ebf-4dbf-a269-938392cbd141-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738060 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a1f9573-3ebf-4dbf-a269-938392cbd141-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738100 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-247h5\" (UniqueName: \"kubernetes.io/projected/4a1f9573-3ebf-4dbf-a269-938392cbd141-kube-api-access-247h5\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738115 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738143 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738193 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738398 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738658 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a1f9573-3ebf-4dbf-a269-938392cbd141-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738696 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1f9573-3ebf-4dbf-a269-938392cbd141-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.739140 4942 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a1f9573-3ebf-4dbf-a269-938392cbd141-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.746993 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.747604 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.766457 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.770637 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-247h5\" (UniqueName: \"kubernetes.io/projected/4a1f9573-3ebf-4dbf-a269-938392cbd141-kube-api-access-247h5\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.784308 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " 
pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.892349 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:20 crc kubenswrapper[4942]: W0218 19:33:20.103612 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77de5cb0_e446_407d_9e32_b13f39c84ae2.slice/crio-f25769d8510cd516ae5401d18772436aec7e570a6454b6b2469618103a8155cf WatchSource:0}: Error finding container f25769d8510cd516ae5401d18772436aec7e570a6454b6b2469618103a8155cf: Status 404 returned error can't find the container with id f25769d8510cd516ae5401d18772436aec7e570a6454b6b2469618103a8155cf Feb 18 19:33:20 crc kubenswrapper[4942]: E0218 19:33:20.103667 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 19:33:20 crc kubenswrapper[4942]: E0218 19:33:20.103945 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hnnbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-b2r8r_openstack(73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:33:20 crc kubenswrapper[4942]: E0218 19:33:20.105125 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" podUID="73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67" Feb 18 19:33:20 crc kubenswrapper[4942]: E0218 19:33:20.123526 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 19:33:20 crc kubenswrapper[4942]: E0218 19:33:20.123693 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxlmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-vvvkv_openstack(9fc86b17-5060-4828-9a92-7e40170ea226): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:33:20 crc kubenswrapper[4942]: E0218 19:33:20.125149 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" podUID="9fc86b17-5060-4828-9a92-7e40170ea226" Feb 18 19:33:20 crc kubenswrapper[4942]: I0218 19:33:20.593574 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:33:20 crc kubenswrapper[4942]: W0218 19:33:20.606066 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6b41292_c562_4964_bb25_d8945415b3da.slice/crio-dbe1e5a24b02c9ef82c5a83259f9ae73faa64933195a6e2349f17abe3b76bba3 WatchSource:0}: Error finding container dbe1e5a24b02c9ef82c5a83259f9ae73faa64933195a6e2349f17abe3b76bba3: Status 404 returned error can't find the container with id dbe1e5a24b02c9ef82c5a83259f9ae73faa64933195a6e2349f17abe3b76bba3 Feb 18 19:33:20 crc kubenswrapper[4942]: I0218 19:33:20.667226 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"77de5cb0-e446-407d-9e32-b13f39c84ae2","Type":"ContainerStarted","Data":"f25769d8510cd516ae5401d18772436aec7e570a6454b6b2469618103a8155cf"} Feb 18 19:33:20 crc kubenswrapper[4942]: I0218 19:33:20.668616 4942 generic.go:334] "Generic (PLEG): container finished" podID="b34cdd67-e888-4718-8889-0dc284187fcc" containerID="031b9ea9109a76a2044d40e6de17d03777ea8f76aba5a0391d56eb6c10d14754" exitCode=0 Feb 18 19:33:20 crc kubenswrapper[4942]: I0218 19:33:20.668663 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" event={"ID":"b34cdd67-e888-4718-8889-0dc284187fcc","Type":"ContainerDied","Data":"031b9ea9109a76a2044d40e6de17d03777ea8f76aba5a0391d56eb6c10d14754"} Feb 18 19:33:20 crc kubenswrapper[4942]: I0218 19:33:20.683515 4942 generic.go:334] "Generic (PLEG): container finished" podID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerID="756562a4164ba39c406456f5f9881491ae21aa337026dce4848f70b89d661fc0" exitCode=0 Feb 18 19:33:20 crc kubenswrapper[4942]: I0218 19:33:20.683590 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" event={"ID":"b7887418-e8d9-434c-a8e3-fed787cbc8c8","Type":"ContainerDied","Data":"756562a4164ba39c406456f5f9881491ae21aa337026dce4848f70b89d661fc0"} Feb 18 19:33:20 crc kubenswrapper[4942]: I0218 19:33:20.686827 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b6b41292-c562-4964-bb25-d8945415b3da","Type":"ContainerStarted","Data":"dbe1e5a24b02c9ef82c5a83259f9ae73faa64933195a6e2349f17abe3b76bba3"} Feb 18 19:33:20 crc kubenswrapper[4942]: I0218 19:33:20.715483 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.084106 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.095084 4942 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.149629 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.177454 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.308173 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-llsph"] Feb 18 19:33:21 crc kubenswrapper[4942]: W0218 19:33:21.321514 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28fe292c_6cda_4e3b_bce3_544ded95930b.slice/crio-bd8762695e07eaf1790db9c5f5764553cd568bf73835eb6d3d76426ce7570e2d WatchSource:0}: Error finding container bd8762695e07eaf1790db9c5f5764553cd568bf73835eb6d3d76426ce7570e2d: Status 404 returned error can't find the container with id bd8762695e07eaf1790db9c5f5764553cd568bf73835eb6d3d76426ce7570e2d Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.346498 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.350399 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.367223 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-config\") pod \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\" (UID: \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\") " Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.367271 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnnbj\" (UniqueName: \"kubernetes.io/projected/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-kube-api-access-hnnbj\") pod \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\" (UID: \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\") " Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.367302 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxlmd\" (UniqueName: \"kubernetes.io/projected/9fc86b17-5060-4828-9a92-7e40170ea226-kube-api-access-rxlmd\") pod \"9fc86b17-5060-4828-9a92-7e40170ea226\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.367907 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-config" (OuterVolumeSpecName: "config") pod "73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67" (UID: "73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.368009 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-config\") pod \"9fc86b17-5060-4828-9a92-7e40170ea226\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.368055 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-dns-svc\") pod \"9fc86b17-5060-4828-9a92-7e40170ea226\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.368399 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-config" (OuterVolumeSpecName: "config") pod "9fc86b17-5060-4828-9a92-7e40170ea226" (UID: "9fc86b17-5060-4828-9a92-7e40170ea226"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.368869 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.368969 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.368975 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9fc86b17-5060-4828-9a92-7e40170ea226" (UID: "9fc86b17-5060-4828-9a92-7e40170ea226"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.373019 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fc86b17-5060-4828-9a92-7e40170ea226-kube-api-access-rxlmd" (OuterVolumeSpecName: "kube-api-access-rxlmd") pod "9fc86b17-5060-4828-9a92-7e40170ea226" (UID: "9fc86b17-5060-4828-9a92-7e40170ea226"). InnerVolumeSpecName "kube-api-access-rxlmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.373122 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-kube-api-access-hnnbj" (OuterVolumeSpecName: "kube-api-access-hnnbj") pod "73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67" (UID: "73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67"). InnerVolumeSpecName "kube-api-access-hnnbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.470192 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnnbj\" (UniqueName: \"kubernetes.io/projected/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-kube-api-access-hnnbj\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.470225 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxlmd\" (UniqueName: \"kubernetes.io/projected/9fc86b17-5060-4828-9a92-7e40170ea226-kube-api-access-rxlmd\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.470236 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.477454 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:33:21 
crc kubenswrapper[4942]: W0218 19:33:21.483970 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9c56d4c_8421_4b07_992d_c0c45223259f.slice/crio-b8bb8ef9f0d862adc5d74e5a0677908d61b1712b9b37898f1d630b9bf520008b WatchSource:0}: Error finding container b8bb8ef9f0d862adc5d74e5a0677908d61b1712b9b37898f1d630b9bf520008b: Status 404 returned error can't find the container with id b8bb8ef9f0d862adc5d74e5a0677908d61b1712b9b37898f1d630b9bf520008b Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.695996 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerStarted","Data":"1193c3f2b445b73f045913a6f677cad12654f417ef42c816b25977d36d83acd7"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.698747 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e39270f2-0125-43f1-a2b3-cda4813614dd","Type":"ContainerStarted","Data":"5de2930cf10c161baa496b8e34743f2af6f232c5ff9c029cd31649a3c4355fdc"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.702067 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" event={"ID":"b34cdd67-e888-4718-8889-0dc284187fcc","Type":"ContainerStarted","Data":"4bc4279f98eaf570cc3afb16c06101e758c31001023850a03a68af7e102724fc"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.703047 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.704148 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-llsph" event={"ID":"28fe292c-6cda-4e3b-bce3-544ded95930b","Type":"ContainerStarted","Data":"bd8762695e07eaf1790db9c5f5764553cd568bf73835eb6d3d76426ce7570e2d"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 
19:33:21.705840 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8f1712c-12df-4ca2-81d3-dc649c747868","Type":"ContainerStarted","Data":"60cb4ff34d0b296ea32561c63d6c9eaa0072a589abe5d55659f37a97a3ea461d"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.706903 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"242ed220-c516-4f30-bb5b-69f28626101a","Type":"ContainerStarted","Data":"565fc44d6da4c257adace4097ad9ed890137eae6d9af423357d18bbc592b3fef"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.707365 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" event={"ID":"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67","Type":"ContainerDied","Data":"154555e460749d363c1de1a0b1abd6b993012b64a0c3c4566686feeefaddd1c6"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.707412 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.709957 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e07db76c-5ab3-430d-b9ad-eba96f02ab9e","Type":"ContainerStarted","Data":"689297c4a8f7d9074ab9928fff44a71a5d276f927970e4ff0e03fa775cccf64e"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.711531 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" event={"ID":"9fc86b17-5060-4828-9a92-7e40170ea226","Type":"ContainerDied","Data":"e8ed36483d76587ac38831cc4f79afb97b35245143a1273adb206d00597090f3"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.711548 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.714393 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" event={"ID":"b7887418-e8d9-434c-a8e3-fed787cbc8c8","Type":"ContainerStarted","Data":"b0a08becdf6cd5acdde160303320ee77217b0a1a88e5089ff77de3d6134ce51a"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.714524 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.716177 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b9c56d4c-8421-4b07-992d-c0c45223259f","Type":"ContainerStarted","Data":"b8bb8ef9f0d862adc5d74e5a0677908d61b1712b9b37898f1d630b9bf520008b"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.721343 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" podStartSLOduration=2.847127291 podStartE2EDuration="16.72132791s" podCreationTimestamp="2026-02-18 19:33:05 +0000 UTC" firstStartedPulling="2026-02-18 19:33:06.386341715 +0000 UTC m=+946.091274380" lastFinishedPulling="2026-02-18 19:33:20.260542344 +0000 UTC m=+959.965474999" observedRunningTime="2026-02-18 19:33:21.720882468 +0000 UTC m=+961.425815133" watchObservedRunningTime="2026-02-18 19:33:21.72132791 +0000 UTC m=+961.426260575" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.755019 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" podStartSLOduration=6.014443434 podStartE2EDuration="16.754995603s" podCreationTimestamp="2026-02-18 19:33:05 +0000 UTC" firstStartedPulling="2026-02-18 19:33:09.527519458 +0000 UTC m=+949.232452133" lastFinishedPulling="2026-02-18 19:33:20.268071637 +0000 UTC m=+959.973004302" observedRunningTime="2026-02-18 19:33:21.752255223 
+0000 UTC m=+961.457187888" watchObservedRunningTime="2026-02-18 19:33:21.754995603 +0000 UTC m=+961.459928298" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.819888 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b2r8r"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.826524 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b2r8r"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.869970 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vvvkv"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.877525 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vvvkv"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.974098 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7xrn9"] Feb 18 19:33:22 crc kubenswrapper[4942]: W0218 19:33:22.425172 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda740e80f_15e5_4745_bb1d_96da2561f33b.slice/crio-3904bfe0d6114f76dfb37e616e4252bd758e05c00fdd53ca0dacf10defd1460e WatchSource:0}: Error finding container 3904bfe0d6114f76dfb37e616e4252bd758e05c00fdd53ca0dacf10defd1460e: Status 404 returned error can't find the container with id 3904bfe0d6114f76dfb37e616e4252bd758e05c00fdd53ca0dacf10defd1460e Feb 18 19:33:22 crc kubenswrapper[4942]: I0218 19:33:22.543004 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:33:22 crc kubenswrapper[4942]: I0218 19:33:22.727807 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xrn9" event={"ID":"a740e80f-15e5-4745-bb1d-96da2561f33b","Type":"ContainerStarted","Data":"3904bfe0d6114f76dfb37e616e4252bd758e05c00fdd53ca0dacf10defd1460e"} Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.044487 4942 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67" path="/var/lib/kubelet/pods/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67/volumes" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.044851 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fc86b17-5060-4828-9a92-7e40170ea226" path="/var/lib/kubelet/pods/9fc86b17-5060-4828-9a92-7e40170ea226/volumes" Feb 18 19:33:23 crc kubenswrapper[4942]: W0218 19:33:23.208027 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a1f9573_3ebf_4dbf_a269_938392cbd141.slice/crio-2f16d88a6d4b2c87ebe8b7a480b25a7dbf5ae8a21bf20643a0b443083877e239 WatchSource:0}: Error finding container 2f16d88a6d4b2c87ebe8b7a480b25a7dbf5ae8a21bf20643a0b443083877e239: Status 404 returned error can't find the container with id 2f16d88a6d4b2c87ebe8b7a480b25a7dbf5ae8a21bf20643a0b443083877e239 Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.688255 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-99sfz"] Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.697485 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.701307 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.705546 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-config\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.706123 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlscn\" (UniqueName: \"kubernetes.io/projected/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-kube-api-access-jlscn\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.706191 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.706277 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-ovs-rundir\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.706385 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-ovn-rundir\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.706441 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-combined-ca-bundle\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.723166 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-99sfz"] Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.760625 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4a1f9573-3ebf-4dbf-a269-938392cbd141","Type":"ContainerStarted","Data":"2f16d88a6d4b2c87ebe8b7a480b25a7dbf5ae8a21bf20643a0b443083877e239"} Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.807581 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-ovn-rundir\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.807641 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-combined-ca-bundle\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc 
kubenswrapper[4942]: I0218 19:33:23.807700 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-config\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.807726 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlscn\" (UniqueName: \"kubernetes.io/projected/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-kube-api-access-jlscn\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.807827 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.807885 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-ovs-rundir\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.808249 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-ovs-rundir\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.808635 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-config\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.808728 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-ovn-rundir\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.816908 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.819462 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99h4x"] Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.819678 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" podUID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerName="dnsmasq-dns" containerID="cri-o://b0a08becdf6cd5acdde160303320ee77217b0a1a88e5089ff77de3d6134ce51a" gracePeriod=10 Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.829861 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-combined-ca-bundle\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc 
kubenswrapper[4942]: I0218 19:33:23.830375 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlscn\" (UniqueName: \"kubernetes.io/projected/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-kube-api-access-jlscn\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.858651 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-nblkl"] Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.868288 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.873035 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.888316 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-nblkl"] Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.909384 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-config\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.909486 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.909558 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.909585 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gltq\" (UniqueName: \"kubernetes.io/projected/782cbd43-a7c9-45f4-99e3-44fe770be6a5-kube-api-access-5gltq\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.958011 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2mvhf"] Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.999002 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6896v"] Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.001328 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.006336 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010256 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010420 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010522 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010598 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gltq\" (UniqueName: \"kubernetes.io/projected/782cbd43-a7c9-45f4-99e3-44fe770be6a5-kube-api-access-5gltq\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010695 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xzjg8\" (UniqueName: \"kubernetes.io/projected/d783b8b1-2938-4635-8a04-df942aa84383-kube-api-access-xzjg8\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010787 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010892 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-config\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010965 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-config\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.011063 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.012019 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010279 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6896v"] Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.012808 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.013702 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-config\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.049224 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.066819 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gltq\" (UniqueName: \"kubernetes.io/projected/782cbd43-a7c9-45f4-99e3-44fe770be6a5-kube-api-access-5gltq\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.112794 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzjg8\" (UniqueName: \"kubernetes.io/projected/d783b8b1-2938-4635-8a04-df942aa84383-kube-api-access-xzjg8\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.112852 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.112922 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-config\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.112964 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.113009 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.114255 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.114255 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-config\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.114339 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.115399 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.128624 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzjg8\" (UniqueName: \"kubernetes.io/projected/d783b8b1-2938-4635-8a04-df942aa84383-kube-api-access-xzjg8\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.273057 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.375429 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.768321 4942 generic.go:334] "Generic (PLEG): container finished" podID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerID="b0a08becdf6cd5acdde160303320ee77217b0a1a88e5089ff77de3d6134ce51a" exitCode=0 Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.768490 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" podUID="b34cdd67-e888-4718-8889-0dc284187fcc" containerName="dnsmasq-dns" containerID="cri-o://4bc4279f98eaf570cc3afb16c06101e758c31001023850a03a68af7e102724fc" gracePeriod=10 Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.768631 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" event={"ID":"b7887418-e8d9-434c-a8e3-fed787cbc8c8","Type":"ContainerDied","Data":"b0a08becdf6cd5acdde160303320ee77217b0a1a88e5089ff77de3d6134ce51a"} Feb 18 19:33:24 crc kubenswrapper[4942]: E0218 19:33:24.936188 4942 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb34cdd67_e888_4718_8889_0dc284187fcc.slice/crio-4bc4279f98eaf570cc3afb16c06101e758c31001023850a03a68af7e102724fc.scope\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb34cdd67_e888_4718_8889_0dc284187fcc.slice/crio-conmon-4bc4279f98eaf570cc3afb16c06101e758c31001023850a03a68af7e102724fc.scope\": RecentStats: unable to find data in memory cache]" Feb 18 19:33:25 crc kubenswrapper[4942]: I0218 19:33:25.779323 4942 generic.go:334] "Generic (PLEG): container finished" podID="b34cdd67-e888-4718-8889-0dc284187fcc" containerID="4bc4279f98eaf570cc3afb16c06101e758c31001023850a03a68af7e102724fc" exitCode=0 Feb 18 19:33:25 crc kubenswrapper[4942]: I0218 19:33:25.779410 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" event={"ID":"b34cdd67-e888-4718-8889-0dc284187fcc","Type":"ContainerDied","Data":"4bc4279f98eaf570cc3afb16c06101e758c31001023850a03a68af7e102724fc"} Feb 18 19:33:25 crc kubenswrapper[4942]: I0218 19:33:25.863333 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" podUID="b34cdd67-e888-4718-8889-0dc284187fcc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.103:5353: connect: connection refused" Feb 18 19:33:26 crc kubenswrapper[4942]: I0218 19:33:26.117429 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" podUID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.104:5353: connect: connection refused" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.714951 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.720425 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.784139 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrdxp\" (UniqueName: \"kubernetes.io/projected/b7887418-e8d9-434c-a8e3-fed787cbc8c8-kube-api-access-vrdxp\") pod \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.784240 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmzc4\" (UniqueName: \"kubernetes.io/projected/b34cdd67-e888-4718-8889-0dc284187fcc-kube-api-access-qmzc4\") pod \"b34cdd67-e888-4718-8889-0dc284187fcc\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.784287 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-dns-svc\") pod \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.784354 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-dns-svc\") pod \"b34cdd67-e888-4718-8889-0dc284187fcc\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.784381 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-config\") pod \"b34cdd67-e888-4718-8889-0dc284187fcc\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.784407 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-config\") pod \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.799056 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b34cdd67-e888-4718-8889-0dc284187fcc-kube-api-access-qmzc4" (OuterVolumeSpecName: "kube-api-access-qmzc4") pod "b34cdd67-e888-4718-8889-0dc284187fcc" (UID: "b34cdd67-e888-4718-8889-0dc284187fcc"). InnerVolumeSpecName "kube-api-access-qmzc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.800693 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7887418-e8d9-434c-a8e3-fed787cbc8c8-kube-api-access-vrdxp" (OuterVolumeSpecName: "kube-api-access-vrdxp") pod "b7887418-e8d9-434c-a8e3-fed787cbc8c8" (UID: "b7887418-e8d9-434c-a8e3-fed787cbc8c8"). InnerVolumeSpecName "kube-api-access-vrdxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.813246 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.813239 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" event={"ID":"b34cdd67-e888-4718-8889-0dc284187fcc","Type":"ContainerDied","Data":"2f50d2e2d9920890883c43f7e3f4d7d184c62c40d9aa2c1ba9d6825c0e37fee3"} Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.813371 4942 scope.go:117] "RemoveContainer" containerID="4bc4279f98eaf570cc3afb16c06101e758c31001023850a03a68af7e102724fc" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.815135 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" event={"ID":"b7887418-e8d9-434c-a8e3-fed787cbc8c8","Type":"ContainerDied","Data":"56b3e02a29b20e42401279e2e4fcf7e7debb435a70a1f70075eaa1d581cacb4f"} Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.815178 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.834462 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-config" (OuterVolumeSpecName: "config") pod "b7887418-e8d9-434c-a8e3-fed787cbc8c8" (UID: "b7887418-e8d9-434c-a8e3-fed787cbc8c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.839135 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-config" (OuterVolumeSpecName: "config") pod "b34cdd67-e888-4718-8889-0dc284187fcc" (UID: "b34cdd67-e888-4718-8889-0dc284187fcc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.847263 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b7887418-e8d9-434c-a8e3-fed787cbc8c8" (UID: "b7887418-e8d9-434c-a8e3-fed787cbc8c8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.847859 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b34cdd67-e888-4718-8889-0dc284187fcc" (UID: "b34cdd67-e888-4718-8889-0dc284187fcc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.886307 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrdxp\" (UniqueName: \"kubernetes.io/projected/b7887418-e8d9-434c-a8e3-fed787cbc8c8-kube-api-access-vrdxp\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.886388 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmzc4\" (UniqueName: \"kubernetes.io/projected/b34cdd67-e888-4718-8889-0dc284187fcc-kube-api-access-qmzc4\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.886404 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.886417 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:28 crc 
kubenswrapper[4942]: I0218 19:33:28.886428 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.886439 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.057527 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6896v"] Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.138285 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2mvhf"] Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.150329 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2mvhf"] Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.158462 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99h4x"] Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.169437 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99h4x"] Feb 18 19:33:29 crc kubenswrapper[4942]: W0218 19:33:29.550995 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd783b8b1_2938_4635_8a04_df942aa84383.slice/crio-448c589fbd7559c4745406aafcb7a6277e2c8e57050b505f7abd3899347233bb WatchSource:0}: Error finding container 448c589fbd7559c4745406aafcb7a6277e2c8e57050b505f7abd3899347233bb: Status 404 returned error can't find the container with id 448c589fbd7559c4745406aafcb7a6277e2c8e57050b505f7abd3899347233bb Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.648438 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-7fd796d7df-nblkl"] Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.648577 4942 scope.go:117] "RemoveContainer" containerID="031b9ea9109a76a2044d40e6de17d03777ea8f76aba5a0391d56eb6c10d14754" Feb 18 19:33:29 crc kubenswrapper[4942]: W0218 19:33:29.719817 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod782cbd43_a7c9_45f4_99e3_44fe770be6a5.slice/crio-81a7746b89eb6f40a6863d6c1c0673a32e9e2c5723e21fb76df57dee6d01c96b WatchSource:0}: Error finding container 81a7746b89eb6f40a6863d6c1c0673a32e9e2c5723e21fb76df57dee6d01c96b: Status 404 returned error can't find the container with id 81a7746b89eb6f40a6863d6c1c0673a32e9e2c5723e21fb76df57dee6d01c96b Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.820787 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" event={"ID":"d783b8b1-2938-4635-8a04-df942aa84383","Type":"ContainerStarted","Data":"448c589fbd7559c4745406aafcb7a6277e2c8e57050b505f7abd3899347233bb"} Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.822638 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" event={"ID":"782cbd43-a7c9-45f4-99e3-44fe770be6a5","Type":"ContainerStarted","Data":"81a7746b89eb6f40a6863d6c1c0673a32e9e2c5723e21fb76df57dee6d01c96b"} Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.863942 4942 scope.go:117] "RemoveContainer" containerID="b0a08becdf6cd5acdde160303320ee77217b0a1a88e5089ff77de3d6134ce51a" Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.954378 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-99sfz"] Feb 18 19:33:30 crc kubenswrapper[4942]: I0218 19:33:30.343265 4942 scope.go:117] "RemoveContainer" containerID="756562a4164ba39c406456f5f9881491ae21aa337026dce4848f70b89d661fc0" Feb 18 19:33:30 crc kubenswrapper[4942]: I0218 19:33:30.830339 4942 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-99sfz" event={"ID":"c3b96f02-5a44-4d7e-842c-392c9a0a73f3","Type":"ContainerStarted","Data":"ac194fb62e391ae98cae815de8bacf33b14e11c43ab0d45b0e7c3ee83dbc6409"} Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.048887 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b34cdd67-e888-4718-8889-0dc284187fcc" path="/var/lib/kubelet/pods/b34cdd67-e888-4718-8889-0dc284187fcc/volumes" Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.049950 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" path="/var/lib/kubelet/pods/b7887418-e8d9-434c-a8e3-fed787cbc8c8/volumes" Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.841435 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"242ed220-c516-4f30-bb5b-69f28626101a","Type":"ContainerStarted","Data":"4f943f7c87633faeb2b85c6c602161dec57abba9259fc1f9a6aa3507c0e0a0df"} Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.841794 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.843197 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4a1f9573-3ebf-4dbf-a269-938392cbd141","Type":"ContainerStarted","Data":"9873a75a0949d55b8ff400f2baf8d74407ceaed4597b522e83ec4925a27a4e86"} Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.844375 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b6b41292-c562-4964-bb25-d8945415b3da","Type":"ContainerStarted","Data":"c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782"} Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.846334 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"e39270f2-0125-43f1-a2b3-cda4813614dd","Type":"ContainerStarted","Data":"9f2574008b8624c11cba68f579e99cde6bb78c2c6d362c6e137f9045a10b1455"} Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.848126 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e07db76c-5ab3-430d-b9ad-eba96f02ab9e","Type":"ContainerStarted","Data":"ac00be85d3d8ee12a284874f5659d3c120ae8405f315e66cca8afed5300f1248"} Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.850333 4942 generic.go:334] "Generic (PLEG): container finished" podID="d783b8b1-2938-4635-8a04-df942aa84383" containerID="b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af" exitCode=0 Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.850381 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" event={"ID":"d783b8b1-2938-4635-8a04-df942aa84383","Type":"ContainerDied","Data":"b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af"} Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.853229 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b9c56d4c-8421-4b07-992d-c0c45223259f","Type":"ContainerStarted","Data":"e3db5e44608e23b4b739d3efeab5f4582ba590d656f2c2a3f42a83bbd390e150"} Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.860635 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=15.046268619 podStartE2EDuration="22.860613368s" podCreationTimestamp="2026-02-18 19:33:09 +0000 UTC" firstStartedPulling="2026-02-18 19:33:20.753715003 +0000 UTC m=+960.458647668" lastFinishedPulling="2026-02-18 19:33:28.568059752 +0000 UTC m=+968.272992417" observedRunningTime="2026-02-18 19:33:31.854869911 +0000 UTC m=+971.559802596" watchObservedRunningTime="2026-02-18 19:33:31.860613368 +0000 UTC m=+971.565546033" Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 
19:33:31.863505 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8f1712c-12df-4ca2-81d3-dc649c747868","Type":"ContainerStarted","Data":"91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6"} Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.864199 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.978297 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.416754996 podStartE2EDuration="19.978280466s" podCreationTimestamp="2026-02-18 19:33:12 +0000 UTC" firstStartedPulling="2026-02-18 19:33:21.124886442 +0000 UTC m=+960.829819107" lastFinishedPulling="2026-02-18 19:33:30.686411922 +0000 UTC m=+970.391344577" observedRunningTime="2026-02-18 19:33:31.973859162 +0000 UTC m=+971.678791827" watchObservedRunningTime="2026-02-18 19:33:31.978280466 +0000 UTC m=+971.683213131" Feb 18 19:33:32 crc kubenswrapper[4942]: I0218 19:33:32.870895 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"77de5cb0-e446-407d-9e32-b13f39c84ae2","Type":"ContainerStarted","Data":"e242de7f4af5755759f500d3c9dbc2395ec18d3bfe3fe38cf008cae5b5314de3"} Feb 18 19:33:32 crc kubenswrapper[4942]: I0218 19:33:32.879466 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xrn9" event={"ID":"a740e80f-15e5-4745-bb1d-96da2561f33b","Type":"ContainerStarted","Data":"cd738650668287e0a0fd738cee99e91bf889dfe4cc5467bd8f993b04d839a48f"} Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.889495 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerStarted","Data":"81a3193c7a82e4ed4f2a5322d29f8d82024b97bad905eacfd10f035fcf65ddf4"} Feb 18 19:33:33 crc 
kubenswrapper[4942]: I0218 19:33:33.893716 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4a1f9573-3ebf-4dbf-a269-938392cbd141","Type":"ContainerStarted","Data":"2abfd758292841dc6a4b717ca2ef392a774d75701b7d447419a69c6bfa7204e7"} Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.896338 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" event={"ID":"d783b8b1-2938-4635-8a04-df942aa84383","Type":"ContainerStarted","Data":"d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164"} Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.897295 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.899132 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-99sfz" event={"ID":"c3b96f02-5a44-4d7e-842c-392c9a0a73f3","Type":"ContainerStarted","Data":"61290553a67df46acac39c3afc33067e38282f15c3f5725b2cbf755f2022bc98"} Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.900989 4942 generic.go:334] "Generic (PLEG): container finished" podID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerID="7e686888a5f0752dbf5d7b1d5a9c7b87451890452b5fc3cae41ff40186646673" exitCode=0 Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.901057 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" event={"ID":"782cbd43-a7c9-45f4-99e3-44fe770be6a5","Type":"ContainerDied","Data":"7e686888a5f0752dbf5d7b1d5a9c7b87451890452b5fc3cae41ff40186646673"} Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.905837 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b9c56d4c-8421-4b07-992d-c0c45223259f","Type":"ContainerStarted","Data":"991fc750407177fd31754e5947897f96b9cb1885b5ff5270501481a94860c3d9"} Feb 18 19:33:33 crc 
kubenswrapper[4942]: I0218 19:33:33.908807 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-llsph" event={"ID":"28fe292c-6cda-4e3b-bce3-544ded95930b","Type":"ContainerStarted","Data":"83cfc67a9754d3914d9e4c2da74236b9954b79567c5032678f8aea995daedba3"} Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.909302 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-llsph" Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.911014 4942 generic.go:334] "Generic (PLEG): container finished" podID="a740e80f-15e5-4745-bb1d-96da2561f33b" containerID="cd738650668287e0a0fd738cee99e91bf889dfe4cc5467bd8f993b04d839a48f" exitCode=0 Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.911138 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xrn9" event={"ID":"a740e80f-15e5-4745-bb1d-96da2561f33b","Type":"ContainerDied","Data":"cd738650668287e0a0fd738cee99e91bf889dfe4cc5467bd8f993b04d839a48f"} Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.945061 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.669252565 podStartE2EDuration="15.945021487s" podCreationTimestamp="2026-02-18 19:33:18 +0000 UTC" firstStartedPulling="2026-02-18 19:33:23.210827781 +0000 UTC m=+962.915760446" lastFinishedPulling="2026-02-18 19:33:32.486596703 +0000 UTC m=+972.191529368" observedRunningTime="2026-02-18 19:33:33.938517901 +0000 UTC m=+973.643450576" watchObservedRunningTime="2026-02-18 19:33:33.945021487 +0000 UTC m=+973.649954152" Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.997016 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" podStartSLOduration=10.99699161 podStartE2EDuration="10.99699161s" podCreationTimestamp="2026-02-18 19:33:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:33:33.9688914 +0000 UTC m=+973.673824085" watchObservedRunningTime="2026-02-18 19:33:33.99699161 +0000 UTC m=+973.701924345" Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.043227 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.023651081 podStartE2EDuration="20.043210496s" podCreationTimestamp="2026-02-18 19:33:14 +0000 UTC" firstStartedPulling="2026-02-18 19:33:21.486581809 +0000 UTC m=+961.191514474" lastFinishedPulling="2026-02-18 19:33:32.506141224 +0000 UTC m=+972.211073889" observedRunningTime="2026-02-18 19:33:34.033355663 +0000 UTC m=+973.738288328" watchObservedRunningTime="2026-02-18 19:33:34.043210496 +0000 UTC m=+973.748143161" Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.073919 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-llsph" podStartSLOduration=10.602932863 podStartE2EDuration="19.07380879s" podCreationTimestamp="2026-02-18 19:33:15 +0000 UTC" firstStartedPulling="2026-02-18 19:33:21.329434828 +0000 UTC m=+961.034367493" lastFinishedPulling="2026-02-18 19:33:29.800310755 +0000 UTC m=+969.505243420" observedRunningTime="2026-02-18 19:33:34.052196026 +0000 UTC m=+973.757128681" watchObservedRunningTime="2026-02-18 19:33:34.07380879 +0000 UTC m=+973.778741475" Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.085951 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-99sfz" podStartSLOduration=8.919069585999999 podStartE2EDuration="11.085935361s" podCreationTimestamp="2026-02-18 19:33:23 +0000 UTC" firstStartedPulling="2026-02-18 19:33:30.320944679 +0000 UTC m=+970.025877364" lastFinishedPulling="2026-02-18 19:33:32.487810464 +0000 UTC m=+972.192743139" observedRunningTime="2026-02-18 19:33:34.073212185 +0000 UTC m=+973.778144850" 
watchObservedRunningTime="2026-02-18 19:33:34.085935361 +0000 UTC m=+973.790868026" Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.175437 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.211297 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.892542 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.893114 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.924495 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xrn9" event={"ID":"a740e80f-15e5-4745-bb1d-96da2561f33b","Type":"ContainerStarted","Data":"4c29434e4ea049fcd7d697109defd55fb1b1f3dcfda8e150d2804fc7850db638"} Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.924544 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xrn9" event={"ID":"a740e80f-15e5-4745-bb1d-96da2561f33b","Type":"ContainerStarted","Data":"3f0f52902ab2a0187531db04018e8f0d7cef935288255fc274a26a8773c0630f"} Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.924819 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.925063 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.929156 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" 
event={"ID":"782cbd43-a7c9-45f4-99e3-44fe770be6a5","Type":"ContainerStarted","Data":"9a09400e944780331e251a7d55ef689b64cf9b9306241c5f789fb2fd71f6617c"} Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.931033 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.956822 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-7xrn9" podStartSLOduration=12.587098882 podStartE2EDuration="19.956802247s" podCreationTimestamp="2026-02-18 19:33:15 +0000 UTC" firstStartedPulling="2026-02-18 19:33:22.430475357 +0000 UTC m=+962.135408022" lastFinishedPulling="2026-02-18 19:33:29.800178712 +0000 UTC m=+969.505111387" observedRunningTime="2026-02-18 19:33:34.94987917 +0000 UTC m=+974.654811865" watchObservedRunningTime="2026-02-18 19:33:34.956802247 +0000 UTC m=+974.661734932" Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.962642 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.998702 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" podStartSLOduration=11.998681121 podStartE2EDuration="11.998681121s" podCreationTimestamp="2026-02-18 19:33:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:33:34.9982347 +0000 UTC m=+974.703167365" watchObservedRunningTime="2026-02-18 19:33:34.998681121 +0000 UTC m=+974.703613796" Feb 18 19:33:35 crc kubenswrapper[4942]: I0218 19:33:35.936944 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:36 crc kubenswrapper[4942]: I0218 19:33:36.211620 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:36 crc kubenswrapper[4942]: I0218 19:33:36.945187 4942 generic.go:334] "Generic (PLEG): container finished" podID="e39270f2-0125-43f1-a2b3-cda4813614dd" containerID="9f2574008b8624c11cba68f579e99cde6bb78c2c6d362c6e137f9045a10b1455" exitCode=0 Feb 18 19:33:36 crc kubenswrapper[4942]: I0218 19:33:36.945268 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e39270f2-0125-43f1-a2b3-cda4813614dd","Type":"ContainerDied","Data":"9f2574008b8624c11cba68f579e99cde6bb78c2c6d362c6e137f9045a10b1455"} Feb 18 19:33:36 crc kubenswrapper[4942]: I0218 19:33:36.947032 4942 generic.go:334] "Generic (PLEG): container finished" podID="e07db76c-5ab3-430d-b9ad-eba96f02ab9e" containerID="ac00be85d3d8ee12a284874f5659d3c120ae8405f315e66cca8afed5300f1248" exitCode=0 Feb 18 19:33:36 crc kubenswrapper[4942]: I0218 19:33:36.948164 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e07db76c-5ab3-430d-b9ad-eba96f02ab9e","Type":"ContainerDied","Data":"ac00be85d3d8ee12a284874f5659d3c120ae8405f315e66cca8afed5300f1248"} Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.011364 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.194159 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 18 19:33:37 crc kubenswrapper[4942]: E0218 19:33:37.194575 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34cdd67-e888-4718-8889-0dc284187fcc" containerName="init" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.194593 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34cdd67-e888-4718-8889-0dc284187fcc" containerName="init" Feb 18 19:33:37 crc kubenswrapper[4942]: E0218 19:33:37.194623 4942 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerName="init" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.194631 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerName="init" Feb 18 19:33:37 crc kubenswrapper[4942]: E0218 19:33:37.194669 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerName="dnsmasq-dns" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.194678 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerName="dnsmasq-dns" Feb 18 19:33:37 crc kubenswrapper[4942]: E0218 19:33:37.194693 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34cdd67-e888-4718-8889-0dc284187fcc" containerName="dnsmasq-dns" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.194701 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34cdd67-e888-4718-8889-0dc284187fcc" containerName="dnsmasq-dns" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.194924 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerName="dnsmasq-dns" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.194953 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b34cdd67-e888-4718-8889-0dc284187fcc" containerName="dnsmasq-dns" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.196032 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.206327 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.206541 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-tcwkq" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.206811 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.206997 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.223992 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.264458 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/056e639a-0805-4bb7-b0bd-620d9c67e6e2-scripts\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.264528 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.264558 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056e639a-0805-4bb7-b0bd-620d9c67e6e2-config\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 
crc kubenswrapper[4942]: I0218 19:33:37.264803 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj477\" (UniqueName: \"kubernetes.io/projected/056e639a-0805-4bb7-b0bd-620d9c67e6e2-kube-api-access-hj477\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.264885 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.264990 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.265071 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/056e639a-0805-4bb7-b0bd-620d9c67e6e2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.366326 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj477\" (UniqueName: \"kubernetes.io/projected/056e639a-0805-4bb7-b0bd-620d9c67e6e2-kube-api-access-hj477\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.366574 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.366701 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.366799 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/056e639a-0805-4bb7-b0bd-620d9c67e6e2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.366899 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/056e639a-0805-4bb7-b0bd-620d9c67e6e2-scripts\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.366992 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.367074 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056e639a-0805-4bb7-b0bd-620d9c67e6e2-config\") pod \"ovn-northd-0\" (UID: 
\"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.367996 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056e639a-0805-4bb7-b0bd-620d9c67e6e2-config\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.368959 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/056e639a-0805-4bb7-b0bd-620d9c67e6e2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.369689 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/056e639a-0805-4bb7-b0bd-620d9c67e6e2-scripts\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.372068 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.372331 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.374750 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.387405 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj477\" (UniqueName: \"kubernetes.io/projected/056e639a-0805-4bb7-b0bd-620d9c67e6e2-kube-api-access-hj477\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.549180 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.955323 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e39270f2-0125-43f1-a2b3-cda4813614dd","Type":"ContainerStarted","Data":"0286ede997b7695538dfeed071898e1e86cab2be007088c478ce52669aef1735"} Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.958865 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e07db76c-5ab3-430d-b9ad-eba96f02ab9e","Type":"ContainerStarted","Data":"cffbd64c3ff3004c7ad067e5a837278a8d7674871fc6d3d9d098323e8ab8da52"} Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.974155 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=22.281420347 podStartE2EDuration="30.974138424s" podCreationTimestamp="2026-02-18 19:33:07 +0000 UTC" firstStartedPulling="2026-02-18 19:33:21.108008069 +0000 UTC m=+960.812940744" lastFinishedPulling="2026-02-18 19:33:29.800726166 +0000 UTC m=+969.505658821" observedRunningTime="2026-02-18 19:33:37.974116064 +0000 UTC m=+977.679048729" watchObservedRunningTime="2026-02-18 19:33:37.974138424 +0000 UTC m=+977.679071089" Feb 18 19:33:38 crc kubenswrapper[4942]: I0218 
19:33:38.009037 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 19:33:38 crc kubenswrapper[4942]: I0218 19:33:38.009200 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=22.599708368 podStartE2EDuration="30.009191353s" podCreationTimestamp="2026-02-18 19:33:08 +0000 UTC" firstStartedPulling="2026-02-18 19:33:21.16299116 +0000 UTC m=+960.867923835" lastFinishedPulling="2026-02-18 19:33:28.572474155 +0000 UTC m=+968.277406820" observedRunningTime="2026-02-18 19:33:38.003936418 +0000 UTC m=+977.708869103" watchObservedRunningTime="2026-02-18 19:33:38.009191353 +0000 UTC m=+977.714124018" Feb 18 19:33:38 crc kubenswrapper[4942]: W0218 19:33:38.011942 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod056e639a_0805_4bb7_b0bd_620d9c67e6e2.slice/crio-39a341ee1f5374f3198fbc5e02b8a61178c88f6523e86a4fc2e4569d2e94a39b WatchSource:0}: Error finding container 39a341ee1f5374f3198fbc5e02b8a61178c88f6523e86a4fc2e4569d2e94a39b: Status 404 returned error can't find the container with id 39a341ee1f5374f3198fbc5e02b8a61178c88f6523e86a4fc2e4569d2e94a39b Feb 18 19:33:38 crc kubenswrapper[4942]: I0218 19:33:38.500731 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 19:33:38 crc kubenswrapper[4942]: I0218 19:33:38.500919 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 19:33:38 crc kubenswrapper[4942]: I0218 19:33:38.970034 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"056e639a-0805-4bb7-b0bd-620d9c67e6e2","Type":"ContainerStarted","Data":"39a341ee1f5374f3198fbc5e02b8a61178c88f6523e86a4fc2e4569d2e94a39b"} Feb 18 19:33:39 crc kubenswrapper[4942]: I0218 19:33:39.277632 4942 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:39 crc kubenswrapper[4942]: I0218 19:33:39.380459 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:39 crc kubenswrapper[4942]: I0218 19:33:39.455281 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-nblkl"] Feb 18 19:33:39 crc kubenswrapper[4942]: I0218 19:33:39.978947 4942 generic.go:334] "Generic (PLEG): container finished" podID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerID="81a3193c7a82e4ed4f2a5322d29f8d82024b97bad905eacfd10f035fcf65ddf4" exitCode=0 Feb 18 19:33:39 crc kubenswrapper[4942]: I0218 19:33:39.979342 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" podUID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerName="dnsmasq-dns" containerID="cri-o://9a09400e944780331e251a7d55ef689b64cf9b9306241c5f789fb2fd71f6617c" gracePeriod=10 Feb 18 19:33:39 crc kubenswrapper[4942]: I0218 19:33:39.979404 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerDied","Data":"81a3193c7a82e4ed4f2a5322d29f8d82024b97bad905eacfd10f035fcf65ddf4"} Feb 18 19:33:40 crc kubenswrapper[4942]: I0218 19:33:40.181111 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:40 crc kubenswrapper[4942]: I0218 19:33:40.182143 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:40 crc kubenswrapper[4942]: I0218 19:33:40.194974 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 18 19:33:40 crc kubenswrapper[4942]: I0218 19:33:40.990749 4942 generic.go:334] "Generic (PLEG): container finished" 
podID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerID="9a09400e944780331e251a7d55ef689b64cf9b9306241c5f789fb2fd71f6617c" exitCode=0 Feb 18 19:33:40 crc kubenswrapper[4942]: I0218 19:33:40.990847 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" event={"ID":"782cbd43-a7c9-45f4-99e3-44fe770be6a5","Type":"ContainerDied","Data":"9a09400e944780331e251a7d55ef689b64cf9b9306241c5f789fb2fd71f6617c"} Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.525682 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-nnzck"] Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.529517 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.558008 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.569448 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8vt6\" (UniqueName: \"kubernetes.io/projected/1e919317-cae2-432d-959f-8cf1d4520b56-kube-api-access-q8vt6\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.569543 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-config\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.569567 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.569586 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.569634 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-dns-svc\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.579774 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nnzck"] Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.675734 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-dns-svc\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.676191 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8vt6\" (UniqueName: \"kubernetes.io/projected/1e919317-cae2-432d-959f-8cf1d4520b56-kube-api-access-q8vt6\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:42 crc kubenswrapper[4942]: 
I0218 19:33:42.676249 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-config\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.676270 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.676295 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.676706 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-dns-svc\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.677020 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.677314 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-config\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.678031 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.733633 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8vt6\" (UniqueName: \"kubernetes.io/projected/1e919317-cae2-432d-959f-8cf1d4520b56-kube-api-access-q8vt6\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.855494 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.670718 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.683033 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.689073 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-b2nqs" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.689276 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.690400 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.690566 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.711857 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.795677 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125bdbb5-76a8-450f-b645-2133024a1bd0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.795734 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.795826 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 
19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.795885 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/125bdbb5-76a8-450f-b645-2133024a1bd0-cache\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.795903 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/125bdbb5-76a8-450f-b645-2133024a1bd0-lock\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.795957 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f47m7\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-kube-api-access-f47m7\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.897873 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f47m7\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-kube-api-access-f47m7\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.897962 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125bdbb5-76a8-450f-b645-2133024a1bd0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.897985 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.898028 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.898066 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/125bdbb5-76a8-450f-b645-2133024a1bd0-cache\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.898086 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/125bdbb5-76a8-450f-b645-2133024a1bd0-lock\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: E0218 19:33:43.898274 4942 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:33:43 crc kubenswrapper[4942]: E0218 19:33:43.898300 4942 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:33:43 crc kubenswrapper[4942]: E0218 19:33:43.898376 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift podName:125bdbb5-76a8-450f-b645-2133024a1bd0 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:33:44.398357545 +0000 UTC m=+984.103290210 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift") pod "swift-storage-0" (UID: "125bdbb5-76a8-450f-b645-2133024a1bd0") : configmap "swift-ring-files" not found Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.898668 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/125bdbb5-76a8-450f-b645-2133024a1bd0-lock\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.898934 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.898941 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/125bdbb5-76a8-450f-b645-2133024a1bd0-cache\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.905522 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125bdbb5-76a8-450f-b645-2133024a1bd0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.918643 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f47m7\" (UniqueName: 
\"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-kube-api-access-f47m7\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.934958 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.119066 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nnzck"] Feb 18 19:33:44 crc kubenswrapper[4942]: W0218 19:33:44.119504 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e919317_cae2_432d_959f_8cf1d4520b56.slice/crio-78b20f729f326e0f7c3c648fac44018c3d34b24ab3d2f709a7f976353f04998c WatchSource:0}: Error finding container 78b20f729f326e0f7c3c648fac44018c3d34b24ab3d2f709a7f976353f04998c: Status 404 returned error can't find the container with id 78b20f729f326e0f7c3c648fac44018c3d34b24ab3d2f709a7f976353f04998c Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.216794 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-cwjhb"] Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.221116 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.224993 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.226230 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.226280 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.230291 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cwjhb"] Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.275704 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" podUID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.117:5353: connect: connection refused" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.305387 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wqmf\" (UniqueName: \"kubernetes.io/projected/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-kube-api-access-8wqmf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.305526 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-dispersionconf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.305553 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-swiftconf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.305627 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-etc-swift\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.305660 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-combined-ca-bundle\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.305721 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-ring-data-devices\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.305746 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-scripts\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.407215 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-dispersionconf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.407652 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-swiftconf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.407747 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-etc-swift\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.407874 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-combined-ca-bundle\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.407976 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.408053 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-etc-swift\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.408061 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-ring-data-devices\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: E0218 19:33:44.408110 4942 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:33:44 crc kubenswrapper[4942]: E0218 19:33:44.408135 4942 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.408146 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-scripts\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: E0218 19:33:44.408189 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift podName:125bdbb5-76a8-450f-b645-2133024a1bd0 nodeName:}" failed. No retries permitted until 2026-02-18 19:33:45.408171571 +0000 UTC m=+985.113104246 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift") pod "swift-storage-0" (UID: "125bdbb5-76a8-450f-b645-2133024a1bd0") : configmap "swift-ring-files" not found Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.408254 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wqmf\" (UniqueName: \"kubernetes.io/projected/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-kube-api-access-8wqmf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.408756 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-scripts\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.408954 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-ring-data-devices\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.410652 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-swiftconf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.411896 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-combined-ca-bundle\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.412268 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-dispersionconf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.427877 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wqmf\" (UniqueName: \"kubernetes.io/projected/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-kube-api-access-8wqmf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.596255 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.021089 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"056e639a-0805-4bb7-b0bd-620d9c67e6e2","Type":"ContainerStarted","Data":"9f17c3fd7994cefbe90968aeaa9c74ad52d177060d2b0fb715dfe39a46d6af5f"} Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.022625 4942 generic.go:334] "Generic (PLEG): container finished" podID="1e919317-cae2-432d-959f-8cf1d4520b56" containerID="2d800ad31d40bf814e416ec398183ae11509cddedf514a96b60bf309617fbbde" exitCode=0 Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.022665 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nnzck" event={"ID":"1e919317-cae2-432d-959f-8cf1d4520b56","Type":"ContainerDied","Data":"2d800ad31d40bf814e416ec398183ae11509cddedf514a96b60bf309617fbbde"} Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.022680 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nnzck" event={"ID":"1e919317-cae2-432d-959f-8cf1d4520b56","Type":"ContainerStarted","Data":"78b20f729f326e0f7c3c648fac44018c3d34b24ab3d2f709a7f976353f04998c"} Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.135986 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cwjhb"] Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.241414 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.333912 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-config\") pod \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.333959 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gltq\" (UniqueName: \"kubernetes.io/projected/782cbd43-a7c9-45f4-99e3-44fe770be6a5-kube-api-access-5gltq\") pod \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.334113 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-ovsdbserver-nb\") pod \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.334197 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-dns-svc\") pod \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.342943 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782cbd43-a7c9-45f4-99e3-44fe770be6a5-kube-api-access-5gltq" (OuterVolumeSpecName: "kube-api-access-5gltq") pod "782cbd43-a7c9-45f4-99e3-44fe770be6a5" (UID: "782cbd43-a7c9-45f4-99e3-44fe770be6a5"). InnerVolumeSpecName "kube-api-access-5gltq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.371529 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "782cbd43-a7c9-45f4-99e3-44fe770be6a5" (UID: "782cbd43-a7c9-45f4-99e3-44fe770be6a5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.380715 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "782cbd43-a7c9-45f4-99e3-44fe770be6a5" (UID: "782cbd43-a7c9-45f4-99e3-44fe770be6a5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.387917 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-config" (OuterVolumeSpecName: "config") pod "782cbd43-a7c9-45f4-99e3-44fe770be6a5" (UID: "782cbd43-a7c9-45f4-99e3-44fe770be6a5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.436489 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.436562 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.436576 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gltq\" (UniqueName: \"kubernetes.io/projected/782cbd43-a7c9-45f4-99e3-44fe770be6a5-kube-api-access-5gltq\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.436586 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.436595 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:45 crc kubenswrapper[4942]: E0218 19:33:45.436733 4942 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:33:45 crc kubenswrapper[4942]: E0218 19:33:45.436774 4942 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:33:45 crc kubenswrapper[4942]: E0218 19:33:45.436846 4942 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift podName:125bdbb5-76a8-450f-b645-2133024a1bd0 nodeName:}" failed. No retries permitted until 2026-02-18 19:33:47.436823304 +0000 UTC m=+987.141756019 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift") pod "swift-storage-0" (UID: "125bdbb5-76a8-450f-b645-2133024a1bd0") : configmap "swift-ring-files" not found Feb 18 19:33:45 crc kubenswrapper[4942]: E0218 19:33:45.816485 4942 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.188:39186->38.102.83.188:38981: write tcp 38.102.83.188:39186->38.102.83.188:38981: write: broken pipe Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.033052 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cwjhb" event={"ID":"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a","Type":"ContainerStarted","Data":"98b157f8537f821e0f49062fdd12779fd66abc1af86316a5e1b821365807dd5d"} Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.035180 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nnzck" event={"ID":"1e919317-cae2-432d-959f-8cf1d4520b56","Type":"ContainerStarted","Data":"c929bc7a17036437784be59c9727e4ee675c038074de07e36b3deb35090e3ae7"} Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.035281 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.037418 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"056e639a-0805-4bb7-b0bd-620d9c67e6e2","Type":"ContainerStarted","Data":"c9513720d4cb93d7288ea798ccc25fec83217b1fdfa20e14c3870e1e4c7ac099"} Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.037995 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ovn-northd-0" Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.039890 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" event={"ID":"782cbd43-a7c9-45f4-99e3-44fe770be6a5","Type":"ContainerDied","Data":"81a7746b89eb6f40a6863d6c1c0673a32e9e2c5723e21fb76df57dee6d01c96b"} Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.039935 4942 scope.go:117] "RemoveContainer" containerID="9a09400e944780331e251a7d55ef689b64cf9b9306241c5f789fb2fd71f6617c" Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.040000 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.053097 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-nnzck" podStartSLOduration=4.053074479 podStartE2EDuration="4.053074479s" podCreationTimestamp="2026-02-18 19:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:33:46.051612132 +0000 UTC m=+985.756544817" watchObservedRunningTime="2026-02-18 19:33:46.053074479 +0000 UTC m=+985.758007144" Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.069933 4942 scope.go:117] "RemoveContainer" containerID="7e686888a5f0752dbf5d7b1d5a9c7b87451890452b5fc3cae41ff40186646673" Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.073512 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=7.326389623 podStartE2EDuration="9.073499403s" podCreationTimestamp="2026-02-18 19:33:37 +0000 UTC" firstStartedPulling="2026-02-18 19:33:38.014088679 +0000 UTC m=+977.719021344" lastFinishedPulling="2026-02-18 19:33:39.761198459 +0000 UTC m=+979.466131124" observedRunningTime="2026-02-18 19:33:46.071684677 +0000 UTC m=+985.776617352" 
watchObservedRunningTime="2026-02-18 19:33:46.073499403 +0000 UTC m=+985.778432068" Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.109068 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-nblkl"] Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.110852 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-nblkl"] Feb 18 19:33:47 crc kubenswrapper[4942]: I0218 19:33:47.059605 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" path="/var/lib/kubelet/pods/782cbd43-a7c9-45f4-99e3-44fe770be6a5/volumes" Feb 18 19:33:47 crc kubenswrapper[4942]: I0218 19:33:47.478942 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:47 crc kubenswrapper[4942]: E0218 19:33:47.479189 4942 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:33:47 crc kubenswrapper[4942]: E0218 19:33:47.479369 4942 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:33:47 crc kubenswrapper[4942]: E0218 19:33:47.479442 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift podName:125bdbb5-76a8-450f-b645-2133024a1bd0 nodeName:}" failed. No retries permitted until 2026-02-18 19:33:51.479416121 +0000 UTC m=+991.184348796 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift") pod "swift-storage-0" (UID: "125bdbb5-76a8-450f-b645-2133024a1bd0") : configmap "swift-ring-files" not found Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.299052 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.378773 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.854366 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4vztq"] Feb 18 19:33:48 crc kubenswrapper[4942]: E0218 19:33:48.854700 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerName="dnsmasq-dns" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.854716 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerName="dnsmasq-dns" Feb 18 19:33:48 crc kubenswrapper[4942]: E0218 19:33:48.854746 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerName="init" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.854752 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerName="init" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.854920 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerName="dnsmasq-dns" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.855484 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.859078 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.878289 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4vztq"] Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.916311 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7rl9\" (UniqueName: \"kubernetes.io/projected/7ae58df9-2a9f-4592-a806-b6f5efd71155-kube-api-access-t7rl9\") pod \"root-account-create-update-4vztq\" (UID: \"7ae58df9-2a9f-4592-a806-b6f5efd71155\") " pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.916367 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae58df9-2a9f-4592-a806-b6f5efd71155-operator-scripts\") pod \"root-account-create-update-4vztq\" (UID: \"7ae58df9-2a9f-4592-a806-b6f5efd71155\") " pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:49 crc kubenswrapper[4942]: I0218 19:33:49.018605 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7rl9\" (UniqueName: \"kubernetes.io/projected/7ae58df9-2a9f-4592-a806-b6f5efd71155-kube-api-access-t7rl9\") pod \"root-account-create-update-4vztq\" (UID: \"7ae58df9-2a9f-4592-a806-b6f5efd71155\") " pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:49 crc kubenswrapper[4942]: I0218 19:33:49.019628 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae58df9-2a9f-4592-a806-b6f5efd71155-operator-scripts\") pod \"root-account-create-update-4vztq\" (UID: 
\"7ae58df9-2a9f-4592-a806-b6f5efd71155\") " pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:49 crc kubenswrapper[4942]: I0218 19:33:49.020617 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae58df9-2a9f-4592-a806-b6f5efd71155-operator-scripts\") pod \"root-account-create-update-4vztq\" (UID: \"7ae58df9-2a9f-4592-a806-b6f5efd71155\") " pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:49 crc kubenswrapper[4942]: I0218 19:33:49.037322 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7rl9\" (UniqueName: \"kubernetes.io/projected/7ae58df9-2a9f-4592-a806-b6f5efd71155-kube-api-access-t7rl9\") pod \"root-account-create-update-4vztq\" (UID: \"7ae58df9-2a9f-4592-a806-b6f5efd71155\") " pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:49 crc kubenswrapper[4942]: I0218 19:33:49.173048 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:50 crc kubenswrapper[4942]: I0218 19:33:50.639821 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 18 19:33:50 crc kubenswrapper[4942]: I0218 19:33:50.717699 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.292781 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-h49cz"] Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.293890 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.302739 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-h49cz"] Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.363609 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76ck2\" (UniqueName: \"kubernetes.io/projected/a3564c8a-5e18-4c53-b225-7e9baf41a371-kube-api-access-76ck2\") pod \"keystone-db-create-h49cz\" (UID: \"a3564c8a-5e18-4c53-b225-7e9baf41a371\") " pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.363691 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3564c8a-5e18-4c53-b225-7e9baf41a371-operator-scripts\") pod \"keystone-db-create-h49cz\" (UID: \"a3564c8a-5e18-4c53-b225-7e9baf41a371\") " pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.414441 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d9d4-account-create-update-7gsvf"] Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.415473 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.419325 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.424554 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d9d4-account-create-update-7gsvf"] Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.465723 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcd24\" (UniqueName: \"kubernetes.io/projected/646ba630-1210-431d-8902-b5c0968b35bb-kube-api-access-rcd24\") pod \"keystone-d9d4-account-create-update-7gsvf\" (UID: \"646ba630-1210-431d-8902-b5c0968b35bb\") " pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.465810 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76ck2\" (UniqueName: \"kubernetes.io/projected/a3564c8a-5e18-4c53-b225-7e9baf41a371-kube-api-access-76ck2\") pod \"keystone-db-create-h49cz\" (UID: \"a3564c8a-5e18-4c53-b225-7e9baf41a371\") " pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.465878 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/646ba630-1210-431d-8902-b5c0968b35bb-operator-scripts\") pod \"keystone-d9d4-account-create-update-7gsvf\" (UID: \"646ba630-1210-431d-8902-b5c0968b35bb\") " pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.465924 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3564c8a-5e18-4c53-b225-7e9baf41a371-operator-scripts\") pod \"keystone-db-create-h49cz\" 
(UID: \"a3564c8a-5e18-4c53-b225-7e9baf41a371\") " pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.466674 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3564c8a-5e18-4c53-b225-7e9baf41a371-operator-scripts\") pod \"keystone-db-create-h49cz\" (UID: \"a3564c8a-5e18-4c53-b225-7e9baf41a371\") " pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.499344 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76ck2\" (UniqueName: \"kubernetes.io/projected/a3564c8a-5e18-4c53-b225-7e9baf41a371-kube-api-access-76ck2\") pod \"keystone-db-create-h49cz\" (UID: \"a3564c8a-5e18-4c53-b225-7e9baf41a371\") " pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.505630 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-9xsbj"] Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.506794 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.517451 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9xsbj"] Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.567015 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6821c713-6163-44f5-a749-415f0c1d8337-operator-scripts\") pod \"placement-db-create-9xsbj\" (UID: \"6821c713-6163-44f5-a749-415f0c1d8337\") " pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.567338 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf9x9\" (UniqueName: \"kubernetes.io/projected/6821c713-6163-44f5-a749-415f0c1d8337-kube-api-access-rf9x9\") pod \"placement-db-create-9xsbj\" (UID: \"6821c713-6163-44f5-a749-415f0c1d8337\") " pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.567375 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcd24\" (UniqueName: \"kubernetes.io/projected/646ba630-1210-431d-8902-b5c0968b35bb-kube-api-access-rcd24\") pod \"keystone-d9d4-account-create-update-7gsvf\" (UID: \"646ba630-1210-431d-8902-b5c0968b35bb\") " pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.567418 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.567454 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/646ba630-1210-431d-8902-b5c0968b35bb-operator-scripts\") pod \"keystone-d9d4-account-create-update-7gsvf\" (UID: \"646ba630-1210-431d-8902-b5c0968b35bb\") " pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:51 crc kubenswrapper[4942]: E0218 19:33:51.567577 4942 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:33:51 crc kubenswrapper[4942]: E0218 19:33:51.567596 4942 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:33:51 crc kubenswrapper[4942]: E0218 19:33:51.567641 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift podName:125bdbb5-76a8-450f-b645-2133024a1bd0 nodeName:}" failed. No retries permitted until 2026-02-18 19:33:59.567624894 +0000 UTC m=+999.272557559 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift") pod "swift-storage-0" (UID: "125bdbb5-76a8-450f-b645-2133024a1bd0") : configmap "swift-ring-files" not found Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.568133 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/646ba630-1210-431d-8902-b5c0968b35bb-operator-scripts\") pod \"keystone-d9d4-account-create-update-7gsvf\" (UID: \"646ba630-1210-431d-8902-b5c0968b35bb\") " pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.598534 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcd24\" (UniqueName: \"kubernetes.io/projected/646ba630-1210-431d-8902-b5c0968b35bb-kube-api-access-rcd24\") pod \"keystone-d9d4-account-create-update-7gsvf\" (UID: \"646ba630-1210-431d-8902-b5c0968b35bb\") " pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.613508 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.620790 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ce28-account-create-update-h5jjz"] Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.623864 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.626628 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.629543 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ce28-account-create-update-h5jjz"] Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.668858 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6821c713-6163-44f5-a749-415f0c1d8337-operator-scripts\") pod \"placement-db-create-9xsbj\" (UID: \"6821c713-6163-44f5-a749-415f0c1d8337\") " pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.668900 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf9x9\" (UniqueName: \"kubernetes.io/projected/6821c713-6163-44f5-a749-415f0c1d8337-kube-api-access-rf9x9\") pod \"placement-db-create-9xsbj\" (UID: \"6821c713-6163-44f5-a749-415f0c1d8337\") " pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.668937 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvxgs\" (UniqueName: \"kubernetes.io/projected/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-kube-api-access-mvxgs\") pod \"placement-ce28-account-create-update-h5jjz\" (UID: \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\") " pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.669016 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-operator-scripts\") pod \"placement-ce28-account-create-update-h5jjz\" (UID: 
\"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\") " pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.669746 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6821c713-6163-44f5-a749-415f0c1d8337-operator-scripts\") pod \"placement-db-create-9xsbj\" (UID: \"6821c713-6163-44f5-a749-415f0c1d8337\") " pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.707463 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf9x9\" (UniqueName: \"kubernetes.io/projected/6821c713-6163-44f5-a749-415f0c1d8337-kube-api-access-rf9x9\") pod \"placement-db-create-9xsbj\" (UID: \"6821c713-6163-44f5-a749-415f0c1d8337\") " pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.739329 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.770255 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvxgs\" (UniqueName: \"kubernetes.io/projected/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-kube-api-access-mvxgs\") pod \"placement-ce28-account-create-update-h5jjz\" (UID: \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\") " pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.770346 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-operator-scripts\") pod \"placement-ce28-account-create-update-h5jjz\" (UID: \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\") " pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.770984 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-operator-scripts\") pod \"placement-ce28-account-create-update-h5jjz\" (UID: \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\") " pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.795335 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvxgs\" (UniqueName: \"kubernetes.io/projected/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-kube-api-access-mvxgs\") pod \"placement-ce28-account-create-update-h5jjz\" (UID: \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\") " pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.849121 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.945049 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:52 crc kubenswrapper[4942]: W0218 19:33:52.331795 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod646ba630_1210_431d_8902_b5c0968b35bb.slice/crio-c2fe7ba176d2c472d430fa4f250787ec1a4f81f13679ca2be4b479e5f0b8e9f6 WatchSource:0}: Error finding container c2fe7ba176d2c472d430fa4f250787ec1a4f81f13679ca2be4b479e5f0b8e9f6: Status 404 returned error can't find the container with id c2fe7ba176d2c472d430fa4f250787ec1a4f81f13679ca2be4b479e5f0b8e9f6 Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.335057 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d9d4-account-create-update-7gsvf"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.457802 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-h49cz"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.471488 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4vztq"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.542796 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9xsbj"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.571862 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-59tjm"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.573385 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.578365 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-59tjm"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.643297 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-9457-account-create-update-5hrw4"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.644870 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.649194 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.659112 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-9457-account-create-update-5hrw4"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.672414 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ce28-account-create-update-h5jjz"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.695581 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jh74\" (UniqueName: \"kubernetes.io/projected/ba056ec7-86a5-43b6-aebd-a22b21843cc3-kube-api-access-7jh74\") pod \"watcher-9457-account-create-update-5hrw4\" (UID: \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\") " pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.695624 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc7sh\" (UniqueName: \"kubernetes.io/projected/2f4f7b72-968a-4aed-b6e9-87f43677f342-kube-api-access-cc7sh\") pod \"watcher-db-create-59tjm\" (UID: \"2f4f7b72-968a-4aed-b6e9-87f43677f342\") " pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:52 crc 
kubenswrapper[4942]: I0218 19:33:52.695685 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f4f7b72-968a-4aed-b6e9-87f43677f342-operator-scripts\") pod \"watcher-db-create-59tjm\" (UID: \"2f4f7b72-968a-4aed-b6e9-87f43677f342\") " pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.695728 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba056ec7-86a5-43b6-aebd-a22b21843cc3-operator-scripts\") pod \"watcher-9457-account-create-update-5hrw4\" (UID: \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\") " pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.796996 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba056ec7-86a5-43b6-aebd-a22b21843cc3-operator-scripts\") pod \"watcher-9457-account-create-update-5hrw4\" (UID: \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\") " pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.797137 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jh74\" (UniqueName: \"kubernetes.io/projected/ba056ec7-86a5-43b6-aebd-a22b21843cc3-kube-api-access-7jh74\") pod \"watcher-9457-account-create-update-5hrw4\" (UID: \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\") " pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.797173 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc7sh\" (UniqueName: \"kubernetes.io/projected/2f4f7b72-968a-4aed-b6e9-87f43677f342-kube-api-access-cc7sh\") pod \"watcher-db-create-59tjm\" (UID: 
\"2f4f7b72-968a-4aed-b6e9-87f43677f342\") " pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.797253 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f4f7b72-968a-4aed-b6e9-87f43677f342-operator-scripts\") pod \"watcher-db-create-59tjm\" (UID: \"2f4f7b72-968a-4aed-b6e9-87f43677f342\") " pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.798288 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f4f7b72-968a-4aed-b6e9-87f43677f342-operator-scripts\") pod \"watcher-db-create-59tjm\" (UID: \"2f4f7b72-968a-4aed-b6e9-87f43677f342\") " pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.799240 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba056ec7-86a5-43b6-aebd-a22b21843cc3-operator-scripts\") pod \"watcher-9457-account-create-update-5hrw4\" (UID: \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\") " pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.819609 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc7sh\" (UniqueName: \"kubernetes.io/projected/2f4f7b72-968a-4aed-b6e9-87f43677f342-kube-api-access-cc7sh\") pod \"watcher-db-create-59tjm\" (UID: \"2f4f7b72-968a-4aed-b6e9-87f43677f342\") " pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.819820 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jh74\" (UniqueName: \"kubernetes.io/projected/ba056ec7-86a5-43b6-aebd-a22b21843cc3-kube-api-access-7jh74\") pod \"watcher-9457-account-create-update-5hrw4\" (UID: \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\") " 
pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.836640 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.856893 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.910351 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6896v"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.910842 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" podUID="d783b8b1-2938-4635-8a04-df942aa84383" containerName="dnsmasq-dns" containerID="cri-o://d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164" gracePeriod=10 Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.110064 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.128283 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4vztq" event={"ID":"7ae58df9-2a9f-4592-a806-b6f5efd71155","Type":"ContainerStarted","Data":"f5b0e5f07640ac134e229b85e5cd422e569347ef8859ca3988fc0a14ab76decb"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.146158 4942 generic.go:334] "Generic (PLEG): container finished" podID="646ba630-1210-431d-8902-b5c0968b35bb" containerID="7973de763d55a77ffbc3e3d1001daee7ca68a526d4309188caa67a4ce4135e55" exitCode=0 Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.147048 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d9d4-account-create-update-7gsvf" event={"ID":"646ba630-1210-431d-8902-b5c0968b35bb","Type":"ContainerDied","Data":"7973de763d55a77ffbc3e3d1001daee7ca68a526d4309188caa67a4ce4135e55"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.147127 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d9d4-account-create-update-7gsvf" event={"ID":"646ba630-1210-431d-8902-b5c0968b35bb","Type":"ContainerStarted","Data":"c2fe7ba176d2c472d430fa4f250787ec1a4f81f13679ca2be4b479e5f0b8e9f6"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.152478 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cwjhb" event={"ID":"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a","Type":"ContainerStarted","Data":"55829c9fbf3eef2bdd3e7606f5ad7942662f83792b2404329b3607ab1503d0ae"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.177482 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9xsbj" event={"ID":"6821c713-6163-44f5-a749-415f0c1d8337","Type":"ContainerStarted","Data":"1eb3204b9b0589d490ccc1c18591bfe59c0e4d3c2638fc8531a3fb7550c8d9bf"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.185589 4942 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-cwjhb" podStartSLOduration=2.466857222 podStartE2EDuration="9.18556742s" podCreationTimestamp="2026-02-18 19:33:44 +0000 UTC" firstStartedPulling="2026-02-18 19:33:45.158090635 +0000 UTC m=+984.863023290" lastFinishedPulling="2026-02-18 19:33:51.876800833 +0000 UTC m=+991.581733488" observedRunningTime="2026-02-18 19:33:53.178222462 +0000 UTC m=+992.883155127" watchObservedRunningTime="2026-02-18 19:33:53.18556742 +0000 UTC m=+992.890500085" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.196857 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerStarted","Data":"19ca73d07d23c2f4be951d7909e61b79e21cfc7d91c0a9ffd938eb9ea1e5646a"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.202599 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-h49cz" event={"ID":"a3564c8a-5e18-4c53-b225-7e9baf41a371","Type":"ContainerStarted","Data":"66f2076a4d3224486921697544c5266b6a3f5f3fd789e15549a3d73d0240e056"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.204352 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ce28-account-create-update-h5jjz" event={"ID":"371430b6-c9b6-48ba-a1a7-d1ce72a001ec","Type":"ContainerStarted","Data":"4e49158c977b69109020d9375918418b28e7f6670849fc1495f27f4bb36f8420"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.204378 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ce28-account-create-update-h5jjz" event={"ID":"371430b6-c9b6-48ba-a1a7-d1ce72a001ec","Type":"ContainerStarted","Data":"eee16dd4bd5b8b487af0f0974bd123a4773635c28b49751dee93789a473f7b0b"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.227018 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-ce28-account-create-update-h5jjz" podStartSLOduration=2.226998103 podStartE2EDuration="2.226998103s" podCreationTimestamp="2026-02-18 19:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:33:53.225077414 +0000 UTC m=+992.930010079" watchObservedRunningTime="2026-02-18 19:33:53.226998103 +0000 UTC m=+992.931930778" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.314974 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-9457-account-create-update-5hrw4"] Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.594884 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.717922 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-sb\") pod \"d783b8b1-2938-4635-8a04-df942aa84383\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.718325 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzjg8\" (UniqueName: \"kubernetes.io/projected/d783b8b1-2938-4635-8a04-df942aa84383-kube-api-access-xzjg8\") pod \"d783b8b1-2938-4635-8a04-df942aa84383\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.718353 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-nb\") pod \"d783b8b1-2938-4635-8a04-df942aa84383\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.718469 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-config\") pod \"d783b8b1-2938-4635-8a04-df942aa84383\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.718535 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-dns-svc\") pod \"d783b8b1-2938-4635-8a04-df942aa84383\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " Feb 18 19:33:53 crc kubenswrapper[4942]: W0218 19:33:53.727728 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f4f7b72_968a_4aed_b6e9_87f43677f342.slice/crio-2b0a48b277512b0bf7d988c25ddbb0deb0bbee03c0e145e86446505884383033 WatchSource:0}: Error finding container 2b0a48b277512b0bf7d988c25ddbb0deb0bbee03c0e145e86446505884383033: Status 404 returned error can't find the container with id 2b0a48b277512b0bf7d988c25ddbb0deb0bbee03c0e145e86446505884383033 Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.729163 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-59tjm"] Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.740342 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.740452 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.741099 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d783b8b1-2938-4635-8a04-df942aa84383-kube-api-access-xzjg8" (OuterVolumeSpecName: "kube-api-access-xzjg8") pod "d783b8b1-2938-4635-8a04-df942aa84383" (UID: "d783b8b1-2938-4635-8a04-df942aa84383"). InnerVolumeSpecName "kube-api-access-xzjg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.763470 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d783b8b1-2938-4635-8a04-df942aa84383" (UID: "d783b8b1-2938-4635-8a04-df942aa84383"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.764628 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-config" (OuterVolumeSpecName: "config") pod "d783b8b1-2938-4635-8a04-df942aa84383" (UID: "d783b8b1-2938-4635-8a04-df942aa84383"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.767087 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d783b8b1-2938-4635-8a04-df942aa84383" (UID: "d783b8b1-2938-4635-8a04-df942aa84383"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.770228 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d783b8b1-2938-4635-8a04-df942aa84383" (UID: "d783b8b1-2938-4635-8a04-df942aa84383"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.820075 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.820108 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.820118 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.820168 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzjg8\" (UniqueName: \"kubernetes.io/projected/d783b8b1-2938-4635-8a04-df942aa84383-kube-api-access-xzjg8\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.820177 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.212811 4942 generic.go:334] "Generic (PLEG): container finished" podID="6821c713-6163-44f5-a749-415f0c1d8337" 
containerID="761092c069dfd66382418fe07bf3c15f0aee53ccbdf6b11196e33385aae3fc8b" exitCode=0 Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.213162 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9xsbj" event={"ID":"6821c713-6163-44f5-a749-415f0c1d8337","Type":"ContainerDied","Data":"761092c069dfd66382418fe07bf3c15f0aee53ccbdf6b11196e33385aae3fc8b"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.214866 4942 generic.go:334] "Generic (PLEG): container finished" podID="a3564c8a-5e18-4c53-b225-7e9baf41a371" containerID="f3ac5111bbb6bd92f96a1d8bfbfe931ddce997416181ddc95500cf9c11a42867" exitCode=0 Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.214977 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-h49cz" event={"ID":"a3564c8a-5e18-4c53-b225-7e9baf41a371","Type":"ContainerDied","Data":"f3ac5111bbb6bd92f96a1d8bfbfe931ddce997416181ddc95500cf9c11a42867"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.216740 4942 generic.go:334] "Generic (PLEG): container finished" podID="371430b6-c9b6-48ba-a1a7-d1ce72a001ec" containerID="4e49158c977b69109020d9375918418b28e7f6670849fc1495f27f4bb36f8420" exitCode=0 Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.216891 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ce28-account-create-update-h5jjz" event={"ID":"371430b6-c9b6-48ba-a1a7-d1ce72a001ec","Type":"ContainerDied","Data":"4e49158c977b69109020d9375918418b28e7f6670849fc1495f27f4bb36f8420"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.219031 4942 generic.go:334] "Generic (PLEG): container finished" podID="7ae58df9-2a9f-4592-a806-b6f5efd71155" containerID="91775cfa347502e2c1757de451b7156448b5de2986ec185b6afdfe4b5a592293" exitCode=0 Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.219080 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4vztq" 
event={"ID":"7ae58df9-2a9f-4592-a806-b6f5efd71155","Type":"ContainerDied","Data":"91775cfa347502e2c1757de451b7156448b5de2986ec185b6afdfe4b5a592293"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.221711 4942 generic.go:334] "Generic (PLEG): container finished" podID="d783b8b1-2938-4635-8a04-df942aa84383" containerID="d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164" exitCode=0 Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.221930 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" event={"ID":"d783b8b1-2938-4635-8a04-df942aa84383","Type":"ContainerDied","Data":"d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.222044 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" event={"ID":"d783b8b1-2938-4635-8a04-df942aa84383","Type":"ContainerDied","Data":"448c589fbd7559c4745406aafcb7a6277e2c8e57050b505f7abd3899347233bb"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.222135 4942 scope.go:117] "RemoveContainer" containerID="d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.222341 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.230944 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-59tjm" event={"ID":"2f4f7b72-968a-4aed-b6e9-87f43677f342","Type":"ContainerStarted","Data":"2b0a48b277512b0bf7d988c25ddbb0deb0bbee03c0e145e86446505884383033"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.233571 4942 generic.go:334] "Generic (PLEG): container finished" podID="ba056ec7-86a5-43b6-aebd-a22b21843cc3" containerID="376d0fc77c68f0c59dee539c15e1e9f915935989d2e259a07dc205d03784efe9" exitCode=0 Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.234415 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-9457-account-create-update-5hrw4" event={"ID":"ba056ec7-86a5-43b6-aebd-a22b21843cc3","Type":"ContainerDied","Data":"376d0fc77c68f0c59dee539c15e1e9f915935989d2e259a07dc205d03784efe9"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.234447 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-9457-account-create-update-5hrw4" event={"ID":"ba056ec7-86a5-43b6-aebd-a22b21843cc3","Type":"ContainerStarted","Data":"38b3c170c47184369c1ec21f9724664ac8066d16e78c42b7d539b1e87174297e"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.284366 4942 scope.go:117] "RemoveContainer" containerID="b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.327035 4942 scope.go:117] "RemoveContainer" containerID="d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164" Feb 18 19:33:54 crc kubenswrapper[4942]: E0218 19:33:54.328713 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164\": container with ID starting with 
d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164 not found: ID does not exist" containerID="d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.328782 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164"} err="failed to get container status \"d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164\": rpc error: code = NotFound desc = could not find container \"d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164\": container with ID starting with d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164 not found: ID does not exist" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.328819 4942 scope.go:117] "RemoveContainer" containerID="b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af" Feb 18 19:33:54 crc kubenswrapper[4942]: E0218 19:33:54.329305 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af\": container with ID starting with b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af not found: ID does not exist" containerID="b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.329343 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af"} err="failed to get container status \"b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af\": rpc error: code = NotFound desc = could not find container \"b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af\": container with ID starting with b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af not found: ID does not 
exist" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.362904 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6896v"] Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.369469 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6896v"] Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.610068 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.642787 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcd24\" (UniqueName: \"kubernetes.io/projected/646ba630-1210-431d-8902-b5c0968b35bb-kube-api-access-rcd24\") pod \"646ba630-1210-431d-8902-b5c0968b35bb\" (UID: \"646ba630-1210-431d-8902-b5c0968b35bb\") " Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.642902 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/646ba630-1210-431d-8902-b5c0968b35bb-operator-scripts\") pod \"646ba630-1210-431d-8902-b5c0968b35bb\" (UID: \"646ba630-1210-431d-8902-b5c0968b35bb\") " Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.644023 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/646ba630-1210-431d-8902-b5c0968b35bb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "646ba630-1210-431d-8902-b5c0968b35bb" (UID: "646ba630-1210-431d-8902-b5c0968b35bb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.650231 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646ba630-1210-431d-8902-b5c0968b35bb-kube-api-access-rcd24" (OuterVolumeSpecName: "kube-api-access-rcd24") pod "646ba630-1210-431d-8902-b5c0968b35bb" (UID: "646ba630-1210-431d-8902-b5c0968b35bb"). InnerVolumeSpecName "kube-api-access-rcd24". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.745424 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcd24\" (UniqueName: \"kubernetes.io/projected/646ba630-1210-431d-8902-b5c0968b35bb-kube-api-access-rcd24\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.745470 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/646ba630-1210-431d-8902-b5c0968b35bb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.049164 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d783b8b1-2938-4635-8a04-df942aa84383" path="/var/lib/kubelet/pods/d783b8b1-2938-4635-8a04-df942aa84383/volumes" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.246012 4942 generic.go:334] "Generic (PLEG): container finished" podID="2f4f7b72-968a-4aed-b6e9-87f43677f342" containerID="837718ff91cb054c2e7fe10e6239bf44f02d0dd7d7855db97e09e837f3dcef65" exitCode=0 Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.246098 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-59tjm" event={"ID":"2f4f7b72-968a-4aed-b6e9-87f43677f342","Type":"ContainerDied","Data":"837718ff91cb054c2e7fe10e6239bf44f02d0dd7d7855db97e09e837f3dcef65"} Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.253513 4942 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/keystone-d9d4-account-create-update-7gsvf" event={"ID":"646ba630-1210-431d-8902-b5c0968b35bb","Type":"ContainerDied","Data":"c2fe7ba176d2c472d430fa4f250787ec1a4f81f13679ca2be4b479e5f0b8e9f6"} Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.253554 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2fe7ba176d2c472d430fa4f250787ec1a4f81f13679ca2be4b479e5f0b8e9f6" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.253611 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.272957 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerStarted","Data":"ebae20c9222b3aee15451c1f0bbaa8cd79204c32bb3e86cff12a92b878e9497f"} Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.708802 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.796047 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba056ec7-86a5-43b6-aebd-a22b21843cc3-operator-scripts\") pod \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\" (UID: \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\") " Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.796236 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jh74\" (UniqueName: \"kubernetes.io/projected/ba056ec7-86a5-43b6-aebd-a22b21843cc3-kube-api-access-7jh74\") pod \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\" (UID: \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\") " Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.797407 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba056ec7-86a5-43b6-aebd-a22b21843cc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba056ec7-86a5-43b6-aebd-a22b21843cc3" (UID: "ba056ec7-86a5-43b6-aebd-a22b21843cc3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.810469 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba056ec7-86a5-43b6-aebd-a22b21843cc3-kube-api-access-7jh74" (OuterVolumeSpecName: "kube-api-access-7jh74") pod "ba056ec7-86a5-43b6-aebd-a22b21843cc3" (UID: "ba056ec7-86a5-43b6-aebd-a22b21843cc3"). InnerVolumeSpecName "kube-api-access-7jh74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.904628 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jh74\" (UniqueName: \"kubernetes.io/projected/ba056ec7-86a5-43b6-aebd-a22b21843cc3-kube-api-access-7jh74\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.904658 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba056ec7-86a5-43b6-aebd-a22b21843cc3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.969886 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.975026 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.983881 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.991270 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108218 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6821c713-6163-44f5-a749-415f0c1d8337-operator-scripts\") pod \"6821c713-6163-44f5-a749-415f0c1d8337\" (UID: \"6821c713-6163-44f5-a749-415f0c1d8337\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108263 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3564c8a-5e18-4c53-b225-7e9baf41a371-operator-scripts\") pod \"a3564c8a-5e18-4c53-b225-7e9baf41a371\" (UID: \"a3564c8a-5e18-4c53-b225-7e9baf41a371\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108303 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvxgs\" (UniqueName: \"kubernetes.io/projected/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-kube-api-access-mvxgs\") pod \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\" (UID: \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108341 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7rl9\" (UniqueName: \"kubernetes.io/projected/7ae58df9-2a9f-4592-a806-b6f5efd71155-kube-api-access-t7rl9\") pod \"7ae58df9-2a9f-4592-a806-b6f5efd71155\" (UID: \"7ae58df9-2a9f-4592-a806-b6f5efd71155\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108492 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae58df9-2a9f-4592-a806-b6f5efd71155-operator-scripts\") pod \"7ae58df9-2a9f-4592-a806-b6f5efd71155\" (UID: \"7ae58df9-2a9f-4592-a806-b6f5efd71155\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108517 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-operator-scripts\") pod \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\" (UID: \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108582 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76ck2\" (UniqueName: \"kubernetes.io/projected/a3564c8a-5e18-4c53-b225-7e9baf41a371-kube-api-access-76ck2\") pod \"a3564c8a-5e18-4c53-b225-7e9baf41a371\" (UID: \"a3564c8a-5e18-4c53-b225-7e9baf41a371\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108645 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf9x9\" (UniqueName: \"kubernetes.io/projected/6821c713-6163-44f5-a749-415f0c1d8337-kube-api-access-rf9x9\") pod \"6821c713-6163-44f5-a749-415f0c1d8337\" (UID: \"6821c713-6163-44f5-a749-415f0c1d8337\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108871 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3564c8a-5e18-4c53-b225-7e9baf41a371-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a3564c8a-5e18-4c53-b225-7e9baf41a371" (UID: "a3564c8a-5e18-4c53-b225-7e9baf41a371"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.109299 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3564c8a-5e18-4c53-b225-7e9baf41a371-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.109654 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae58df9-2a9f-4592-a806-b6f5efd71155-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ae58df9-2a9f-4592-a806-b6f5efd71155" (UID: "7ae58df9-2a9f-4592-a806-b6f5efd71155"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.110476 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6821c713-6163-44f5-a749-415f0c1d8337-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6821c713-6163-44f5-a749-415f0c1d8337" (UID: "6821c713-6163-44f5-a749-415f0c1d8337"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.111154 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "371430b6-c9b6-48ba-a1a7-d1ce72a001ec" (UID: "371430b6-c9b6-48ba-a1a7-d1ce72a001ec"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.112844 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-kube-api-access-mvxgs" (OuterVolumeSpecName: "kube-api-access-mvxgs") pod "371430b6-c9b6-48ba-a1a7-d1ce72a001ec" (UID: "371430b6-c9b6-48ba-a1a7-d1ce72a001ec"). InnerVolumeSpecName "kube-api-access-mvxgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.112879 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6821c713-6163-44f5-a749-415f0c1d8337-kube-api-access-rf9x9" (OuterVolumeSpecName: "kube-api-access-rf9x9") pod "6821c713-6163-44f5-a749-415f0c1d8337" (UID: "6821c713-6163-44f5-a749-415f0c1d8337"). InnerVolumeSpecName "kube-api-access-rf9x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.113690 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae58df9-2a9f-4592-a806-b6f5efd71155-kube-api-access-t7rl9" (OuterVolumeSpecName: "kube-api-access-t7rl9") pod "7ae58df9-2a9f-4592-a806-b6f5efd71155" (UID: "7ae58df9-2a9f-4592-a806-b6f5efd71155"). InnerVolumeSpecName "kube-api-access-t7rl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.114214 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3564c8a-5e18-4c53-b225-7e9baf41a371-kube-api-access-76ck2" (OuterVolumeSpecName: "kube-api-access-76ck2") pod "a3564c8a-5e18-4c53-b225-7e9baf41a371" (UID: "a3564c8a-5e18-4c53-b225-7e9baf41a371"). InnerVolumeSpecName "kube-api-access-76ck2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.210846 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvxgs\" (UniqueName: \"kubernetes.io/projected/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-kube-api-access-mvxgs\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.210882 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7rl9\" (UniqueName: \"kubernetes.io/projected/7ae58df9-2a9f-4592-a806-b6f5efd71155-kube-api-access-t7rl9\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.210893 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae58df9-2a9f-4592-a806-b6f5efd71155-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.210902 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.210912 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76ck2\" (UniqueName: \"kubernetes.io/projected/a3564c8a-5e18-4c53-b225-7e9baf41a371-kube-api-access-76ck2\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.210921 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf9x9\" (UniqueName: \"kubernetes.io/projected/6821c713-6163-44f5-a749-415f0c1d8337-kube-api-access-rf9x9\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.210932 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6821c713-6163-44f5-a749-415f0c1d8337-operator-scripts\") on node \"crc\" DevicePath \"\"" 
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.299830 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.299805 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-9457-account-create-update-5hrw4" event={"ID":"ba056ec7-86a5-43b6-aebd-a22b21843cc3","Type":"ContainerDied","Data":"38b3c170c47184369c1ec21f9724664ac8066d16e78c42b7d539b1e87174297e"} Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.300163 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b3c170c47184369c1ec21f9724664ac8066d16e78c42b7d539b1e87174297e" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.303063 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.303064 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9xsbj" event={"ID":"6821c713-6163-44f5-a749-415f0c1d8337","Type":"ContainerDied","Data":"1eb3204b9b0589d490ccc1c18591bfe59c0e4d3c2638fc8531a3fb7550c8d9bf"} Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.303091 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eb3204b9b0589d490ccc1c18591bfe59c0e4d3c2638fc8531a3fb7550c8d9bf" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.314520 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-h49cz" event={"ID":"a3564c8a-5e18-4c53-b225-7e9baf41a371","Type":"ContainerDied","Data":"66f2076a4d3224486921697544c5266b6a3f5f3fd789e15549a3d73d0240e056"} Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.314564 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66f2076a4d3224486921697544c5266b6a3f5f3fd789e15549a3d73d0240e056" Feb 18 19:33:56 
crc kubenswrapper[4942]: I0218 19:33:56.314638 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.316707 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ce28-account-create-update-h5jjz" event={"ID":"371430b6-c9b6-48ba-a1a7-d1ce72a001ec","Type":"ContainerDied","Data":"eee16dd4bd5b8b487af0f0974bd123a4773635c28b49751dee93789a473f7b0b"} Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.316750 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee16dd4bd5b8b487af0f0974bd123a4773635c28b49751dee93789a473f7b0b" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.316839 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.320907 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4vztq" event={"ID":"7ae58df9-2a9f-4592-a806-b6f5efd71155","Type":"ContainerDied","Data":"f5b0e5f07640ac134e229b85e5cd422e569347ef8859ca3988fc0a14ab76decb"} Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.320950 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5b0e5f07640ac134e229b85e5cd422e569347ef8859ca3988fc0a14ab76decb" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.320993 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.720696 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.820945 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f4f7b72-968a-4aed-b6e9-87f43677f342-operator-scripts\") pod \"2f4f7b72-968a-4aed-b6e9-87f43677f342\" (UID: \"2f4f7b72-968a-4aed-b6e9-87f43677f342\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.821067 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc7sh\" (UniqueName: \"kubernetes.io/projected/2f4f7b72-968a-4aed-b6e9-87f43677f342-kube-api-access-cc7sh\") pod \"2f4f7b72-968a-4aed-b6e9-87f43677f342\" (UID: \"2f4f7b72-968a-4aed-b6e9-87f43677f342\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.821588 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f4f7b72-968a-4aed-b6e9-87f43677f342-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f4f7b72-968a-4aed-b6e9-87f43677f342" (UID: "2f4f7b72-968a-4aed-b6e9-87f43677f342"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.829199 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f4f7b72-968a-4aed-b6e9-87f43677f342-kube-api-access-cc7sh" (OuterVolumeSpecName: "kube-api-access-cc7sh") pod "2f4f7b72-968a-4aed-b6e9-87f43677f342" (UID: "2f4f7b72-968a-4aed-b6e9-87f43677f342"). InnerVolumeSpecName "kube-api-access-cc7sh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.923540 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f4f7b72-968a-4aed-b6e9-87f43677f342-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.923583 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc7sh\" (UniqueName: \"kubernetes.io/projected/2f4f7b72-968a-4aed-b6e9-87f43677f342-kube-api-access-cc7sh\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.152669 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4vztq"] Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.160444 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4vztq"] Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.240654 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-g7rkd"] Feb 18 19:33:57 crc kubenswrapper[4942]: E0218 19:33:57.243685 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d783b8b1-2938-4635-8a04-df942aa84383" containerName="init" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.243863 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d783b8b1-2938-4635-8a04-df942aa84383" containerName="init" Feb 18 19:33:57 crc kubenswrapper[4942]: E0218 19:33:57.243952 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae58df9-2a9f-4592-a806-b6f5efd71155" containerName="mariadb-account-create-update" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.244029 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae58df9-2a9f-4592-a806-b6f5efd71155" containerName="mariadb-account-create-update" Feb 18 19:33:57 crc kubenswrapper[4942]: E0218 19:33:57.244109 4942 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a3564c8a-5e18-4c53-b225-7e9baf41a371" containerName="mariadb-database-create" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.244182 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3564c8a-5e18-4c53-b225-7e9baf41a371" containerName="mariadb-database-create" Feb 18 19:33:57 crc kubenswrapper[4942]: E0218 19:33:57.244366 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646ba630-1210-431d-8902-b5c0968b35bb" containerName="mariadb-account-create-update" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.244450 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="646ba630-1210-431d-8902-b5c0968b35bb" containerName="mariadb-account-create-update" Feb 18 19:33:57 crc kubenswrapper[4942]: E0218 19:33:57.244530 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d783b8b1-2938-4635-8a04-df942aa84383" containerName="dnsmasq-dns" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.244796 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d783b8b1-2938-4635-8a04-df942aa84383" containerName="dnsmasq-dns" Feb 18 19:33:57 crc kubenswrapper[4942]: E0218 19:33:57.244955 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f4f7b72-968a-4aed-b6e9-87f43677f342" containerName="mariadb-database-create" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.245039 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4f7b72-968a-4aed-b6e9-87f43677f342" containerName="mariadb-database-create" Feb 18 19:33:57 crc kubenswrapper[4942]: E0218 19:33:57.245128 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6821c713-6163-44f5-a749-415f0c1d8337" containerName="mariadb-database-create" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.245249 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="6821c713-6163-44f5-a749-415f0c1d8337" containerName="mariadb-database-create" Feb 18 19:33:57 crc 
kubenswrapper[4942]: E0218 19:33:57.245355 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="371430b6-c9b6-48ba-a1a7-d1ce72a001ec" containerName="mariadb-account-create-update" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.245443 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="371430b6-c9b6-48ba-a1a7-d1ce72a001ec" containerName="mariadb-account-create-update" Feb 18 19:33:57 crc kubenswrapper[4942]: E0218 19:33:57.245537 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba056ec7-86a5-43b6-aebd-a22b21843cc3" containerName="mariadb-account-create-update" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.245632 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba056ec7-86a5-43b6-aebd-a22b21843cc3" containerName="mariadb-account-create-update" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.245897 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3564c8a-5e18-4c53-b225-7e9baf41a371" containerName="mariadb-database-create" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.246003 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae58df9-2a9f-4592-a806-b6f5efd71155" containerName="mariadb-account-create-update" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.246123 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="d783b8b1-2938-4635-8a04-df942aa84383" containerName="dnsmasq-dns" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.246197 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="646ba630-1210-431d-8902-b5c0968b35bb" containerName="mariadb-account-create-update" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.246285 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="371430b6-c9b6-48ba-a1a7-d1ce72a001ec" containerName="mariadb-account-create-update" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.246367 4942 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ba056ec7-86a5-43b6-aebd-a22b21843cc3" containerName="mariadb-account-create-update" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.246448 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="6821c713-6163-44f5-a749-415f0c1d8337" containerName="mariadb-database-create" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.246523 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f4f7b72-968a-4aed-b6e9-87f43677f342" containerName="mariadb-database-create" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.247255 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g7rkd" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.250244 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-g7rkd"] Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.251272 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.330367 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-operator-scripts\") pod \"root-account-create-update-g7rkd\" (UID: \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\") " pod="openstack/root-account-create-update-g7rkd" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.330511 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mkzm\" (UniqueName: \"kubernetes.io/projected/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-kube-api-access-8mkzm\") pod \"root-account-create-update-g7rkd\" (UID: \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\") " pod="openstack/root-account-create-update-g7rkd" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.330818 4942 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-59tjm" event={"ID":"2f4f7b72-968a-4aed-b6e9-87f43677f342","Type":"ContainerDied","Data":"2b0a48b277512b0bf7d988c25ddbb0deb0bbee03c0e145e86446505884383033"} Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.330852 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b0a48b277512b0bf7d988c25ddbb0deb0bbee03c0e145e86446505884383033" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.330981 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.431852 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mkzm\" (UniqueName: \"kubernetes.io/projected/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-kube-api-access-8mkzm\") pod \"root-account-create-update-g7rkd\" (UID: \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\") " pod="openstack/root-account-create-update-g7rkd" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.432327 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-operator-scripts\") pod \"root-account-create-update-g7rkd\" (UID: \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\") " pod="openstack/root-account-create-update-g7rkd" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.433146 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-operator-scripts\") pod \"root-account-create-update-g7rkd\" (UID: \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\") " pod="openstack/root-account-create-update-g7rkd" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.458582 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mkzm\" 
(UniqueName: \"kubernetes.io/projected/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-kube-api-access-8mkzm\") pod \"root-account-create-update-g7rkd\" (UID: \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\") " pod="openstack/root-account-create-update-g7rkd" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.584497 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g7rkd" Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.614431 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 18 19:33:58 crc kubenswrapper[4942]: I0218 19:33:58.821266 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-g7rkd"] Feb 18 19:33:59 crc kubenswrapper[4942]: I0218 19:33:59.055679 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae58df9-2a9f-4592-a806-b6f5efd71155" path="/var/lib/kubelet/pods/7ae58df9-2a9f-4592-a806-b6f5efd71155/volumes" Feb 18 19:33:59 crc kubenswrapper[4942]: I0218 19:33:59.352585 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerStarted","Data":"fd3aef2dcd467a4e4443cb718f2ad37e73afe0c2cc787eca566999184738b19b"} Feb 18 19:33:59 crc kubenswrapper[4942]: I0218 19:33:59.354535 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g7rkd" event={"ID":"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb","Type":"ContainerStarted","Data":"3ca7995811727ed16b81c6dacf4b796cf8cb865100445c8661ce6034aba901d3"} Feb 18 19:33:59 crc kubenswrapper[4942]: I0218 19:33:59.354577 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g7rkd" event={"ID":"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb","Type":"ContainerStarted","Data":"9091bd51dd260200eceb22826dedd139c79e73e5248d64dfbd7f691b19339ef5"} Feb 18 19:33:59 crc 
kubenswrapper[4942]: I0218 19:33:59.389115 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=10.209447248 podStartE2EDuration="47.389100825s" podCreationTimestamp="2026-02-18 19:33:12 +0000 UTC" firstStartedPulling="2026-02-18 19:33:21.134631212 +0000 UTC m=+960.839563877" lastFinishedPulling="2026-02-18 19:33:58.314284789 +0000 UTC m=+998.019217454" observedRunningTime="2026-02-18 19:33:59.387436303 +0000 UTC m=+999.092368968" watchObservedRunningTime="2026-02-18 19:33:59.389100825 +0000 UTC m=+999.094033490" Feb 18 19:33:59 crc kubenswrapper[4942]: I0218 19:33:59.410737 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-g7rkd" podStartSLOduration=2.41071641 podStartE2EDuration="2.41071641s" podCreationTimestamp="2026-02-18 19:33:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:33:59.409652982 +0000 UTC m=+999.114585687" watchObservedRunningTime="2026-02-18 19:33:59.41071641 +0000 UTC m=+999.115649075" Feb 18 19:33:59 crc kubenswrapper[4942]: I0218 19:33:59.602918 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:59 crc kubenswrapper[4942]: E0218 19:33:59.603164 4942 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:33:59 crc kubenswrapper[4942]: E0218 19:33:59.603933 4942 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:33:59 crc kubenswrapper[4942]: E0218 19:33:59.604019 4942 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift podName:125bdbb5-76a8-450f-b645-2133024a1bd0 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:15.603995437 +0000 UTC m=+1015.308928102 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift") pod "swift-storage-0" (UID: "125bdbb5-76a8-450f-b645-2133024a1bd0") : configmap "swift-ring-files" not found Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.334218 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-tjf5x"] Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.335715 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tjf5x" Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.354534 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tjf5x"] Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.365899 4942 generic.go:334] "Generic (PLEG): container finished" podID="2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" containerID="55829c9fbf3eef2bdd3e7606f5ad7942662f83792b2404329b3607ab1503d0ae" exitCode=0 Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.365967 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cwjhb" event={"ID":"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a","Type":"ContainerDied","Data":"55829c9fbf3eef2bdd3e7606f5ad7942662f83792b2404329b3607ab1503d0ae"} Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.368220 4942 generic.go:334] "Generic (PLEG): container finished" podID="7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb" containerID="3ca7995811727ed16b81c6dacf4b796cf8cb865100445c8661ce6034aba901d3" exitCode=0 Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.368263 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/root-account-create-update-g7rkd" event={"ID":"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb","Type":"ContainerDied","Data":"3ca7995811727ed16b81c6dacf4b796cf8cb865100445c8661ce6034aba901d3"} Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.516626 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a1ca129-f896-4d68-b119-701a991fe0ba-operator-scripts\") pod \"glance-db-create-tjf5x\" (UID: \"6a1ca129-f896-4d68-b119-701a991fe0ba\") " pod="openstack/glance-db-create-tjf5x" Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.516676 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qkc8\" (UniqueName: \"kubernetes.io/projected/6a1ca129-f896-4d68-b119-701a991fe0ba-kube-api-access-7qkc8\") pod \"glance-db-create-tjf5x\" (UID: \"6a1ca129-f896-4d68-b119-701a991fe0ba\") " pod="openstack/glance-db-create-tjf5x" Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.537336 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8ff9-account-create-update-k7n8f"] Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.538573 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8ff9-account-create-update-k7n8f" Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.541860 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.550697 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8ff9-account-create-update-k7n8f"] Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.622649 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzmr5\" (UniqueName: \"kubernetes.io/projected/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-kube-api-access-lzmr5\") pod \"glance-8ff9-account-create-update-k7n8f\" (UID: \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\") " pod="openstack/glance-8ff9-account-create-update-k7n8f" Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.622745 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a1ca129-f896-4d68-b119-701a991fe0ba-operator-scripts\") pod \"glance-db-create-tjf5x\" (UID: \"6a1ca129-f896-4d68-b119-701a991fe0ba\") " pod="openstack/glance-db-create-tjf5x" Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.622793 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qkc8\" (UniqueName: \"kubernetes.io/projected/6a1ca129-f896-4d68-b119-701a991fe0ba-kube-api-access-7qkc8\") pod \"glance-db-create-tjf5x\" (UID: \"6a1ca129-f896-4d68-b119-701a991fe0ba\") " pod="openstack/glance-db-create-tjf5x" Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.622825 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-operator-scripts\") pod \"glance-8ff9-account-create-update-k7n8f\" (UID: 
\"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\") " pod="openstack/glance-8ff9-account-create-update-k7n8f" Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.623653 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a1ca129-f896-4d68-b119-701a991fe0ba-operator-scripts\") pod \"glance-db-create-tjf5x\" (UID: \"6a1ca129-f896-4d68-b119-701a991fe0ba\") " pod="openstack/glance-db-create-tjf5x" Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.642273 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qkc8\" (UniqueName: \"kubernetes.io/projected/6a1ca129-f896-4d68-b119-701a991fe0ba-kube-api-access-7qkc8\") pod \"glance-db-create-tjf5x\" (UID: \"6a1ca129-f896-4d68-b119-701a991fe0ba\") " pod="openstack/glance-db-create-tjf5x" Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.694531 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tjf5x" Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.723889 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-operator-scripts\") pod \"glance-8ff9-account-create-update-k7n8f\" (UID: \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\") " pod="openstack/glance-8ff9-account-create-update-k7n8f" Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.724030 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzmr5\" (UniqueName: \"kubernetes.io/projected/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-kube-api-access-lzmr5\") pod \"glance-8ff9-account-create-update-k7n8f\" (UID: \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\") " pod="openstack/glance-8ff9-account-create-update-k7n8f" Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.724652 4942 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-operator-scripts\") pod \"glance-8ff9-account-create-update-k7n8f\" (UID: \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\") " pod="openstack/glance-8ff9-account-create-update-k7n8f" Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.743290 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzmr5\" (UniqueName: \"kubernetes.io/projected/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-kube-api-access-lzmr5\") pod \"glance-8ff9-account-create-update-k7n8f\" (UID: \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\") " pod="openstack/glance-8ff9-account-create-update-k7n8f" Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.857713 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8ff9-account-create-update-k7n8f" Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.136745 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tjf5x"] Feb 18 19:34:01 crc kubenswrapper[4942]: W0218 19:34:01.139655 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a1ca129_f896_4d68_b119_701a991fe0ba.slice/crio-48727d3eb0eedecfb6dafc81742628ccd42d08cfb10aa697d75200f08ec66f17 WatchSource:0}: Error finding container 48727d3eb0eedecfb6dafc81742628ccd42d08cfb10aa697d75200f08ec66f17: Status 404 returned error can't find the container with id 48727d3eb0eedecfb6dafc81742628ccd42d08cfb10aa697d75200f08ec66f17 Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.291680 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8ff9-account-create-update-k7n8f"] Feb 18 19:34:01 crc kubenswrapper[4942]: W0218 19:34:01.292746 4942 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8611c14f_da0c_410e_9c3a_dc6cb5a698a7.slice/crio-212d831cb56941ea16551014811164e2bba8ae62aba8307e744d2ba3a32d9f46 WatchSource:0}: Error finding container 212d831cb56941ea16551014811164e2bba8ae62aba8307e744d2ba3a32d9f46: Status 404 returned error can't find the container with id 212d831cb56941ea16551014811164e2bba8ae62aba8307e744d2ba3a32d9f46 Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.378747 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8ff9-account-create-update-k7n8f" event={"ID":"8611c14f-da0c-410e-9c3a-dc6cb5a698a7","Type":"ContainerStarted","Data":"212d831cb56941ea16551014811164e2bba8ae62aba8307e744d2ba3a32d9f46"} Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.380845 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tjf5x" event={"ID":"6a1ca129-f896-4d68-b119-701a991fe0ba","Type":"ContainerStarted","Data":"c942add3a433a64faf7638403a168e22e7b5e2f26ceaa17e1731c6044072942d"} Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.380900 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tjf5x" event={"ID":"6a1ca129-f896-4d68-b119-701a991fe0ba","Type":"ContainerStarted","Data":"48727d3eb0eedecfb6dafc81742628ccd42d08cfb10aa697d75200f08ec66f17"} Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.407125 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-tjf5x" podStartSLOduration=1.407104583 podStartE2EDuration="1.407104583s" podCreationTimestamp="2026-02-18 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:34:01.397136727 +0000 UTC m=+1001.102069392" watchObservedRunningTime="2026-02-18 19:34:01.407104583 +0000 UTC m=+1001.112037268" Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.844791 4942 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.851158 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g7rkd" Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944192 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-dispersionconf\") pod \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944244 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-operator-scripts\") pod \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\" (UID: \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\") " Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944297 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-ring-data-devices\") pod \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944323 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-scripts\") pod \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944381 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wqmf\" (UniqueName: \"kubernetes.io/projected/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-kube-api-access-8wqmf\") pod 
\"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944400 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-swiftconf\") pod \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944415 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-combined-ca-bundle\") pod \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944463 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-etc-swift\") pod \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944478 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mkzm\" (UniqueName: \"kubernetes.io/projected/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-kube-api-access-8mkzm\") pod \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\" (UID: \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\") " Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.946679 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" (UID: "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.946690 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb" (UID: "7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.949736 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" (UID: "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.955978 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-kube-api-access-8mkzm" (OuterVolumeSpecName: "kube-api-access-8mkzm") pod "7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb" (UID: "7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb"). InnerVolumeSpecName "kube-api-access-8mkzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.956202 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-kube-api-access-8wqmf" (OuterVolumeSpecName: "kube-api-access-8wqmf") pod "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" (UID: "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a"). InnerVolumeSpecName "kube-api-access-8wqmf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.958851 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" (UID: "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.987741 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" (UID: "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.988258 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" (UID: "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.994162 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-scripts" (OuterVolumeSpecName: "scripts") pod "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" (UID: "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047018 4942 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047052 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047061 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wqmf\" (UniqueName: \"kubernetes.io/projected/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-kube-api-access-8wqmf\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047072 4942 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047080 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047088 4942 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047097 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mkzm\" (UniqueName: \"kubernetes.io/projected/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-kube-api-access-8mkzm\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047106 
4942 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047114 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.394400 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g7rkd" event={"ID":"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb","Type":"ContainerDied","Data":"9091bd51dd260200eceb22826dedd139c79e73e5248d64dfbd7f691b19339ef5"} Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.394437 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9091bd51dd260200eceb22826dedd139c79e73e5248d64dfbd7f691b19339ef5" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.394488 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-g7rkd" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.408812 4942 generic.go:334] "Generic (PLEG): container finished" podID="8611c14f-da0c-410e-9c3a-dc6cb5a698a7" containerID="549770ba7dc9b2efdf1b7dbd1827ec366b9e1e693aeec0f1a695091cdbeda9bc" exitCode=0 Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.408922 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8ff9-account-create-update-k7n8f" event={"ID":"8611c14f-da0c-410e-9c3a-dc6cb5a698a7","Type":"ContainerDied","Data":"549770ba7dc9b2efdf1b7dbd1827ec366b9e1e693aeec0f1a695091cdbeda9bc"} Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.421744 4942 generic.go:334] "Generic (PLEG): container finished" podID="6a1ca129-f896-4d68-b119-701a991fe0ba" containerID="c942add3a433a64faf7638403a168e22e7b5e2f26ceaa17e1731c6044072942d" exitCode=0 Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.422016 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tjf5x" event={"ID":"6a1ca129-f896-4d68-b119-701a991fe0ba","Type":"ContainerDied","Data":"c942add3a433a64faf7638403a168e22e7b5e2f26ceaa17e1731c6044072942d"} Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.426670 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cwjhb" event={"ID":"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a","Type":"ContainerDied","Data":"98b157f8537f821e0f49062fdd12779fd66abc1af86316a5e1b821365807dd5d"} Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.426706 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98b157f8537f821e0f49062fdd12779fd66abc1af86316a5e1b821365807dd5d" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.426788 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cwjhb"
Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.873837 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tjf5x"
Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.881247 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8ff9-account-create-update-k7n8f"
Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.882244 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-g7rkd"]
Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.889855 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-g7rkd"]
Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.977954 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.987008 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a1ca129-f896-4d68-b119-701a991fe0ba-operator-scripts\") pod \"6a1ca129-f896-4d68-b119-701a991fe0ba\" (UID: \"6a1ca129-f896-4d68-b119-701a991fe0ba\") "
Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.987076 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qkc8\" (UniqueName: \"kubernetes.io/projected/6a1ca129-f896-4d68-b119-701a991fe0ba-kube-api-access-7qkc8\") pod \"6a1ca129-f896-4d68-b119-701a991fe0ba\" (UID: \"6a1ca129-f896-4d68-b119-701a991fe0ba\") "
Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.987122 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzmr5\" (UniqueName: \"kubernetes.io/projected/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-kube-api-access-lzmr5\") pod \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\" (UID: \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\") "
Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.987141 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-operator-scripts\") pod \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\" (UID: \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\") "
Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.988157 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8611c14f-da0c-410e-9c3a-dc6cb5a698a7" (UID: "8611c14f-da0c-410e-9c3a-dc6cb5a698a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.988277 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a1ca129-f896-4d68-b119-701a991fe0ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a1ca129-f896-4d68-b119-701a991fe0ba" (UID: "6a1ca129-f896-4d68-b119-701a991fe0ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.994537 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-kube-api-access-lzmr5" (OuterVolumeSpecName: "kube-api-access-lzmr5") pod "8611c14f-da0c-410e-9c3a-dc6cb5a698a7" (UID: "8611c14f-da0c-410e-9c3a-dc6cb5a698a7"). InnerVolumeSpecName "kube-api-access-lzmr5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.996209 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a1ca129-f896-4d68-b119-701a991fe0ba-kube-api-access-7qkc8" (OuterVolumeSpecName: "kube-api-access-7qkc8") pod "6a1ca129-f896-4d68-b119-701a991fe0ba" (UID: "6a1ca129-f896-4d68-b119-701a991fe0ba"). InnerVolumeSpecName "kube-api-access-7qkc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.089120 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a1ca129-f896-4d68-b119-701a991fe0ba-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.089154 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qkc8\" (UniqueName: \"kubernetes.io/projected/6a1ca129-f896-4d68-b119-701a991fe0ba-kube-api-access-7qkc8\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.089168 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzmr5\" (UniqueName: \"kubernetes.io/projected/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-kube-api-access-lzmr5\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.089208 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.459086 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tjf5x" event={"ID":"6a1ca129-f896-4d68-b119-701a991fe0ba","Type":"ContainerDied","Data":"48727d3eb0eedecfb6dafc81742628ccd42d08cfb10aa697d75200f08ec66f17"}
Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.459460 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48727d3eb0eedecfb6dafc81742628ccd42d08cfb10aa697d75200f08ec66f17"
Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.459605 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tjf5x"
Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.461548 4942 generic.go:334] "Generic (PLEG): container finished" podID="b6b41292-c562-4964-bb25-d8945415b3da" containerID="c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782" exitCode=0
Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.461826 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b6b41292-c562-4964-bb25-d8945415b3da","Type":"ContainerDied","Data":"c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782"}
Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.467279 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8ff9-account-create-update-k7n8f" event={"ID":"8611c14f-da0c-410e-9c3a-dc6cb5a698a7","Type":"ContainerDied","Data":"212d831cb56941ea16551014811164e2bba8ae62aba8307e744d2ba3a32d9f46"}
Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.467346 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="212d831cb56941ea16551014811164e2bba8ae62aba8307e744d2ba3a32d9f46"
Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.467469 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8ff9-account-create-update-k7n8f"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.046842 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb" path="/var/lib/kubelet/pods/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb/volumes"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.478451 4942 generic.go:334] "Generic (PLEG): container finished" podID="77de5cb0-e446-407d-9e32-b13f39c84ae2" containerID="e242de7f4af5755759f500d3c9dbc2395ec18d3bfe3fe38cf008cae5b5314de3" exitCode=0
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.478534 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"77de5cb0-e446-407d-9e32-b13f39c84ae2","Type":"ContainerDied","Data":"e242de7f4af5755759f500d3c9dbc2395ec18d3bfe3fe38cf008cae5b5314de3"}
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.481261 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b6b41292-c562-4964-bb25-d8945415b3da","Type":"ContainerStarted","Data":"4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94"}
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.481923 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.551670 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.589590562 podStartE2EDuration="1m0.55165067s" podCreationTimestamp="2026-02-18 19:33:05 +0000 UTC" firstStartedPulling="2026-02-18 19:33:20.608427606 +0000 UTC m=+960.313360271" lastFinishedPulling="2026-02-18 19:33:28.570487714 +0000 UTC m=+968.275420379" observedRunningTime="2026-02-18 19:34:05.539351144 +0000 UTC m=+1005.244283829" watchObservedRunningTime="2026-02-18 19:34:05.55165067 +0000 UTC m=+1005.256583335"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.680995 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-zw8ls"]
Feb 18 19:34:05 crc kubenswrapper[4942]: E0218 19:34:05.681353 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb" containerName="mariadb-account-create-update"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.681371 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb" containerName="mariadb-account-create-update"
Feb 18 19:34:05 crc kubenswrapper[4942]: E0218 19:34:05.681387 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" containerName="swift-ring-rebalance"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.681393 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" containerName="swift-ring-rebalance"
Feb 18 19:34:05 crc kubenswrapper[4942]: E0218 19:34:05.681408 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1ca129-f896-4d68-b119-701a991fe0ba" containerName="mariadb-database-create"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.681416 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1ca129-f896-4d68-b119-701a991fe0ba" containerName="mariadb-database-create"
Feb 18 19:34:05 crc kubenswrapper[4942]: E0218 19:34:05.681427 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8611c14f-da0c-410e-9c3a-dc6cb5a698a7" containerName="mariadb-account-create-update"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.681434 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="8611c14f-da0c-410e-9c3a-dc6cb5a698a7" containerName="mariadb-account-create-update"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.681575 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" containerName="swift-ring-rebalance"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.681593 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1ca129-f896-4d68-b119-701a991fe0ba" containerName="mariadb-database-create"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.681604 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb" containerName="mariadb-account-create-update"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.681610 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="8611c14f-da0c-410e-9c3a-dc6cb5a698a7" containerName="mariadb-account-create-update"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.682342 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-zw8ls"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.684726 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-j6c2t"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.684746 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.698672 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zw8ls"]
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.823399 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-combined-ca-bundle\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.823480 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-db-sync-config-data\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.823518 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb9h8\" (UniqueName: \"kubernetes.io/projected/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-kube-api-access-gb9h8\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.823641 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-config-data\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.924960 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-db-sync-config-data\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.925023 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb9h8\" (UniqueName: \"kubernetes.io/projected/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-kube-api-access-gb9h8\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.925075 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-config-data\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.925127 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-combined-ca-bundle\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.929344 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-db-sync-config-data\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.929694 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-combined-ca-bundle\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.930852 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-config-data\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls"
Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.942436 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb9h8\" (UniqueName: \"kubernetes.io/projected/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-kube-api-access-gb9h8\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.003349 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-zw8ls"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.081156 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-llsph" podUID="28fe292c-6cda-4e3b-bce3-544ded95930b" containerName="ovn-controller" probeResult="failure" output=<
Feb 18 19:34:06 crc kubenswrapper[4942]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 18 19:34:06 crc kubenswrapper[4942]: >
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.084068 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7xrn9"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.096511 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7xrn9"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.490708 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"77de5cb0-e446-407d-9e32-b13f39c84ae2","Type":"ContainerStarted","Data":"2a06461943313e923de9b2391c5eb34c6a9c08986670b8d6bae063427214e0e7"}
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.491089 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.561298 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=52.119115986 podStartE2EDuration="1m1.561280825s" podCreationTimestamp="2026-02-18 19:33:05 +0000 UTC" firstStartedPulling="2026-02-18 19:33:20.106938004 +0000 UTC m=+959.811870669" lastFinishedPulling="2026-02-18 19:33:29.549102843 +0000 UTC m=+969.254035508" observedRunningTime="2026-02-18 19:34:06.557303343 +0000 UTC m=+1006.262236008" watchObservedRunningTime="2026-02-18 19:34:06.561280825 +0000 UTC m=+1006.266213480"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.583283 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-llsph-config-56xlq"]
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.584495 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.587035 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.594889 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-llsph-config-56xlq"]
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.618692 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zw8ls"]
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.743306 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-additional-scripts\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.743611 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.743642 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gfms\" (UniqueName: \"kubernetes.io/projected/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-kube-api-access-7gfms\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.743722 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run-ovn\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.743817 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-log-ovn\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.743952 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-scripts\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.845969 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-scripts\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.846026 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-additional-scripts\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.846107 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.846147 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gfms\" (UniqueName: \"kubernetes.io/projected/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-kube-api-access-7gfms\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.846179 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run-ovn\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.846480 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-log-ovn\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.846488 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.846492 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run-ovn\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.846552 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-log-ovn\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.847100 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-additional-scripts\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.847904 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-scripts\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.866103 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gfms\" (UniqueName: \"kubernetes.io/projected/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-kube-api-access-7gfms\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.904422 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.171770 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-trjtn"]
Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.173073 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-trjtn"
Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.174654 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.186275 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-trjtn"]
Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.253023 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22b30cc6-6022-4a4f-9911-7a47df5f2c98-operator-scripts\") pod \"root-account-create-update-trjtn\" (UID: \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\") " pod="openstack/root-account-create-update-trjtn"
Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.253284 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64zb7\" (UniqueName: \"kubernetes.io/projected/22b30cc6-6022-4a4f-9911-7a47df5f2c98-kube-api-access-64zb7\") pod \"root-account-create-update-trjtn\" (UID: \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\") " pod="openstack/root-account-create-update-trjtn"
Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.355249 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22b30cc6-6022-4a4f-9911-7a47df5f2c98-operator-scripts\") pod \"root-account-create-update-trjtn\" (UID: \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\") " pod="openstack/root-account-create-update-trjtn"
Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.355441 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64zb7\" (UniqueName: \"kubernetes.io/projected/22b30cc6-6022-4a4f-9911-7a47df5f2c98-kube-api-access-64zb7\") pod \"root-account-create-update-trjtn\" (UID: \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\") " pod="openstack/root-account-create-update-trjtn"
Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.356216 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22b30cc6-6022-4a4f-9911-7a47df5f2c98-operator-scripts\") pod \"root-account-create-update-trjtn\" (UID: \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\") " pod="openstack/root-account-create-update-trjtn"
Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.377952 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64zb7\" (UniqueName: \"kubernetes.io/projected/22b30cc6-6022-4a4f-9911-7a47df5f2c98-kube-api-access-64zb7\") pod \"root-account-create-update-trjtn\" (UID: \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\") " pod="openstack/root-account-create-update-trjtn"
Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.495163 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-trjtn"
Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.504878 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zw8ls" event={"ID":"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3","Type":"ContainerStarted","Data":"e983b61464f792023c5c202bd16dd9437e3b945f9e2f82c09b596638a70e9520"}
Feb 18 19:34:08 crc kubenswrapper[4942]: I0218 19:34:07.858454 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-llsph-config-56xlq"]
Feb 18 19:34:08 crc kubenswrapper[4942]: W0218 19:34:07.868505 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cf16fd5_4915_49f5_b08b_d1bad49cd27a.slice/crio-7d8fd65744afa0050e57481e8dbc7b7a75d872cbbfb24132783ee4dc0a627056 WatchSource:0}: Error finding container 7d8fd65744afa0050e57481e8dbc7b7a75d872cbbfb24132783ee4dc0a627056: Status 404 returned error can't find the container with id 7d8fd65744afa0050e57481e8dbc7b7a75d872cbbfb24132783ee4dc0a627056
Feb 18 19:34:08 crc kubenswrapper[4942]: I0218 19:34:08.018376 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-trjtn"]
Feb 18 19:34:08 crc kubenswrapper[4942]: I0218 19:34:08.520210 4942 generic.go:334] "Generic (PLEG): container finished" podID="22b30cc6-6022-4a4f-9911-7a47df5f2c98" containerID="f762c8a9d2890b0c6a5aa76b7b4d8dbd055509fafd584287df55f4c0629feaed" exitCode=0
Feb 18 19:34:08 crc kubenswrapper[4942]: I0218 19:34:08.520419 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-trjtn" event={"ID":"22b30cc6-6022-4a4f-9911-7a47df5f2c98","Type":"ContainerDied","Data":"f762c8a9d2890b0c6a5aa76b7b4d8dbd055509fafd584287df55f4c0629feaed"}
Feb 18 19:34:08 crc kubenswrapper[4942]: I0218 19:34:08.520470 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-trjtn" event={"ID":"22b30cc6-6022-4a4f-9911-7a47df5f2c98","Type":"ContainerStarted","Data":"6eaccd964fe5040d0302d45e130d879648b3963ccef106d16cd8f783846424c0"}
Feb 18 19:34:08 crc kubenswrapper[4942]: I0218 19:34:08.527188 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-llsph-config-56xlq" event={"ID":"4cf16fd5-4915-49f5-b08b-d1bad49cd27a","Type":"ContainerStarted","Data":"7d25a210ee23b71ffe8e6422d5c4b01d726dcdfde682e5219625754a6f1f5d53"}
Feb 18 19:34:08 crc kubenswrapper[4942]: I0218 19:34:08.527221 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-llsph-config-56xlq" event={"ID":"4cf16fd5-4915-49f5-b08b-d1bad49cd27a","Type":"ContainerStarted","Data":"7d8fd65744afa0050e57481e8dbc7b7a75d872cbbfb24132783ee4dc0a627056"}
Feb 18 19:34:08 crc kubenswrapper[4942]: I0218 19:34:08.562034 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-llsph-config-56xlq" podStartSLOduration=2.562013479 podStartE2EDuration="2.562013479s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:34:08.54997323 +0000 UTC m=+1008.254905895" watchObservedRunningTime="2026-02-18 19:34:08.562013479 +0000 UTC m=+1008.266946144"
Feb 18 19:34:09 crc kubenswrapper[4942]: I0218 19:34:09.541109 4942 generic.go:334] "Generic (PLEG): container finished" podID="4cf16fd5-4915-49f5-b08b-d1bad49cd27a" containerID="7d25a210ee23b71ffe8e6422d5c4b01d726dcdfde682e5219625754a6f1f5d53" exitCode=0
Feb 18 19:34:09 crc kubenswrapper[4942]: I0218 19:34:09.541252 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-llsph-config-56xlq" event={"ID":"4cf16fd5-4915-49f5-b08b-d1bad49cd27a","Type":"ContainerDied","Data":"7d25a210ee23b71ffe8e6422d5c4b01d726dcdfde682e5219625754a6f1f5d53"}
Feb 18 19:34:09 crc kubenswrapper[4942]: I0218 19:34:09.927303 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-trjtn"
Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.007146 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22b30cc6-6022-4a4f-9911-7a47df5f2c98-operator-scripts\") pod \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\" (UID: \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\") "
Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.007396 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64zb7\" (UniqueName: \"kubernetes.io/projected/22b30cc6-6022-4a4f-9911-7a47df5f2c98-kube-api-access-64zb7\") pod \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\" (UID: \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\") "
Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.008013 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22b30cc6-6022-4a4f-9911-7a47df5f2c98-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22b30cc6-6022-4a4f-9911-7a47df5f2c98" (UID: "22b30cc6-6022-4a4f-9911-7a47df5f2c98"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.019578 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22b30cc6-6022-4a4f-9911-7a47df5f2c98-kube-api-access-64zb7" (OuterVolumeSpecName: "kube-api-access-64zb7") pod "22b30cc6-6022-4a4f-9911-7a47df5f2c98" (UID: "22b30cc6-6022-4a4f-9911-7a47df5f2c98"). InnerVolumeSpecName "kube-api-access-64zb7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.109493 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22b30cc6-6022-4a4f-9911-7a47df5f2c98-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.109531 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64zb7\" (UniqueName: \"kubernetes.io/projected/22b30cc6-6022-4a4f-9911-7a47df5f2c98-kube-api-access-64zb7\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.551834 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-trjtn"
Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.553069 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-trjtn" event={"ID":"22b30cc6-6022-4a4f-9911-7a47df5f2c98","Type":"ContainerDied","Data":"6eaccd964fe5040d0302d45e130d879648b3963ccef106d16cd8f783846424c0"}
Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.554013 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eaccd964fe5040d0302d45e130d879648b3963ccef106d16cd8f783846424c0"
Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.924447 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-llsph-config-56xlq"
Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.024564 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gfms\" (UniqueName: \"kubernetes.io/projected/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-kube-api-access-7gfms\") pod \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") "
Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.024616 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-log-ovn\") pod \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") "
Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.024645 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-scripts\") pod \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") "
Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.024663 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4cf16fd5-4915-49f5-b08b-d1bad49cd27a" (UID: "4cf16fd5-4915-49f5-b08b-d1bad49cd27a"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.025646 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-scripts" (OuterVolumeSpecName: "scripts") pod "4cf16fd5-4915-49f5-b08b-d1bad49cd27a" (UID: "4cf16fd5-4915-49f5-b08b-d1bad49cd27a"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.026099 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run\") pod \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.026286 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-additional-scripts\") pod \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.026332 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run-ovn\") pod \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.026283 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run" (OuterVolumeSpecName: "var-run") pod "4cf16fd5-4915-49f5-b08b-d1bad49cd27a" (UID: "4cf16fd5-4915-49f5-b08b-d1bad49cd27a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.026447 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4cf16fd5-4915-49f5-b08b-d1bad49cd27a" (UID: "4cf16fd5-4915-49f5-b08b-d1bad49cd27a"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.026526 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4cf16fd5-4915-49f5-b08b-d1bad49cd27a" (UID: "4cf16fd5-4915-49f5-b08b-d1bad49cd27a"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.027023 4942 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.027042 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.027052 4942 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.027062 4942 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.027072 4942 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.042519 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-kube-api-access-7gfms" (OuterVolumeSpecName: "kube-api-access-7gfms") pod "4cf16fd5-4915-49f5-b08b-d1bad49cd27a" (UID: "4cf16fd5-4915-49f5-b08b-d1bad49cd27a"). InnerVolumeSpecName "kube-api-access-7gfms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.097904 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-llsph" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.128834 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gfms\" (UniqueName: \"kubernetes.io/projected/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-kube-api-access-7gfms\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.560049 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-llsph-config-56xlq" event={"ID":"4cf16fd5-4915-49f5-b08b-d1bad49cd27a","Type":"ContainerDied","Data":"7d8fd65744afa0050e57481e8dbc7b7a75d872cbbfb24132783ee4dc0a627056"} Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.560088 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d8fd65744afa0050e57481e8dbc7b7a75d872cbbfb24132783ee4dc0a627056" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.560134 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:12 crc kubenswrapper[4942]: I0218 19:34:12.028793 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-llsph-config-56xlq"] Feb 18 19:34:12 crc kubenswrapper[4942]: I0218 19:34:12.035342 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-llsph-config-56xlq"] Feb 18 19:34:13 crc kubenswrapper[4942]: I0218 19:34:13.049455 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cf16fd5-4915-49f5-b08b-d1bad49cd27a" path="/var/lib/kubelet/pods/4cf16fd5-4915-49f5-b08b-d1bad49cd27a/volumes" Feb 18 19:34:13 crc kubenswrapper[4942]: I0218 19:34:13.901843 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-trjtn"] Feb 18 19:34:13 crc kubenswrapper[4942]: I0218 19:34:13.908285 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-trjtn"] Feb 18 19:34:13 crc kubenswrapper[4942]: I0218 19:34:13.977656 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:13 crc kubenswrapper[4942]: I0218 19:34:13.980269 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:14 crc kubenswrapper[4942]: I0218 19:34:14.585877 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:15 crc kubenswrapper[4942]: I0218 19:34:15.053728 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22b30cc6-6022-4a4f-9911-7a47df5f2c98" path="/var/lib/kubelet/pods/22b30cc6-6022-4a4f-9911-7a47df5f2c98/volumes" Feb 18 19:34:15 crc kubenswrapper[4942]: I0218 19:34:15.606334 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:34:15 crc kubenswrapper[4942]: I0218 19:34:15.621188 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:34:15 crc kubenswrapper[4942]: I0218 19:34:15.874146 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 18 19:34:16 crc kubenswrapper[4942]: I0218 19:34:16.820237 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:34:16 crc kubenswrapper[4942]: I0218 19:34:16.820782 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="prometheus" containerID="cri-o://19ca73d07d23c2f4be951d7909e61b79e21cfc7d91c0a9ffd938eb9ea1e5646a" gracePeriod=600 Feb 18 19:34:16 crc kubenswrapper[4942]: I0218 19:34:16.820867 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="thanos-sidecar" containerID="cri-o://fd3aef2dcd467a4e4443cb718f2ad37e73afe0c2cc787eca566999184738b19b" gracePeriod=600 Feb 18 19:34:16 crc kubenswrapper[4942]: I0218 19:34:16.820890 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="config-reloader" containerID="cri-o://ebae20c9222b3aee15451c1f0bbaa8cd79204c32bb3e86cff12a92b878e9497f" gracePeriod=600 Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 
19:34:17.034010 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.327916 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.392668 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4zlhp"] Feb 18 19:34:17 crc kubenswrapper[4942]: E0218 19:34:17.393011 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22b30cc6-6022-4a4f-9911-7a47df5f2c98" containerName="mariadb-account-create-update" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.393028 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="22b30cc6-6022-4a4f-9911-7a47df5f2c98" containerName="mariadb-account-create-update" Feb 18 19:34:17 crc kubenswrapper[4942]: E0218 19:34:17.393053 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf16fd5-4915-49f5-b08b-d1bad49cd27a" containerName="ovn-config" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.393060 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf16fd5-4915-49f5-b08b-d1bad49cd27a" containerName="ovn-config" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.393209 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf16fd5-4915-49f5-b08b-d1bad49cd27a" containerName="ovn-config" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.393235 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="22b30cc6-6022-4a4f-9911-7a47df5f2c98" containerName="mariadb-account-create-update" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.393807 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.413608 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4zlhp"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.455534 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-4h9n5"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.456799 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.459205 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-jp82k" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.459287 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.479348 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-4h9n5"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.549286 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a8e424f-44a5-4eaa-9f3f-882f070aa404-operator-scripts\") pod \"cinder-db-create-4zlhp\" (UID: \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\") " pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.549621 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5lhw\" (UniqueName: \"kubernetes.io/projected/9a8e424f-44a5-4eaa-9f3f-882f070aa404-kube-api-access-q5lhw\") pod \"cinder-db-create-4zlhp\" (UID: \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\") " pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.577839 4942 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-e916-account-create-update-lm2r5"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.579470 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.583683 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.592202 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e916-account-create-update-lm2r5"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.613945 4942 generic.go:334] "Generic (PLEG): container finished" podID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerID="fd3aef2dcd467a4e4443cb718f2ad37e73afe0c2cc787eca566999184738b19b" exitCode=0 Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.613967 4942 generic.go:334] "Generic (PLEG): container finished" podID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerID="ebae20c9222b3aee15451c1f0bbaa8cd79204c32bb3e86cff12a92b878e9497f" exitCode=0 Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.613974 4942 generic.go:334] "Generic (PLEG): container finished" podID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerID="19ca73d07d23c2f4be951d7909e61b79e21cfc7d91c0a9ffd938eb9ea1e5646a" exitCode=0 Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.613993 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerDied","Data":"fd3aef2dcd467a4e4443cb718f2ad37e73afe0c2cc787eca566999184738b19b"} Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.614021 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerDied","Data":"ebae20c9222b3aee15451c1f0bbaa8cd79204c32bb3e86cff12a92b878e9497f"} Feb 18 19:34:17 crc 
kubenswrapper[4942]: I0218 19:34:17.614032 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerDied","Data":"19ca73d07d23c2f4be951d7909e61b79e21cfc7d91c0a9ffd938eb9ea1e5646a"} Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.651562 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-db-sync-config-data\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.651615 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5lhw\" (UniqueName: \"kubernetes.io/projected/9a8e424f-44a5-4eaa-9f3f-882f070aa404-kube-api-access-q5lhw\") pod \"cinder-db-create-4zlhp\" (UID: \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\") " pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.651663 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-config-data\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.651712 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfbhf\" (UniqueName: \"kubernetes.io/projected/983d5293-8413-4a29-88b2-ba775b3b4a8b-kube-api-access-mfbhf\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.651747 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a8e424f-44a5-4eaa-9f3f-882f070aa404-operator-scripts\") pod \"cinder-db-create-4zlhp\" (UID: \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\") " pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.651808 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-combined-ca-bundle\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.652703 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a8e424f-44a5-4eaa-9f3f-882f070aa404-operator-scripts\") pod \"cinder-db-create-4zlhp\" (UID: \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\") " pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.667378 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-njfd6"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.668785 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.700478 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-njfd6"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.705957 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5lhw\" (UniqueName: \"kubernetes.io/projected/9a8e424f-44a5-4eaa-9f3f-882f070aa404-kube-api-access-q5lhw\") pod \"cinder-db-create-4zlhp\" (UID: \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\") " pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.710055 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.753709 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-db-sync-config-data\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.754176 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-config-data\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.754301 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcea68e2-0d37-4812-a7ad-403e59b7b556-operator-scripts\") pod \"cinder-e916-account-create-update-lm2r5\" (UID: \"fcea68e2-0d37-4812-a7ad-403e59b7b556\") " pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:17 crc 
kubenswrapper[4942]: I0218 19:34:17.754493 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfbhf\" (UniqueName: \"kubernetes.io/projected/983d5293-8413-4a29-88b2-ba775b3b4a8b-kube-api-access-mfbhf\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.754683 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-combined-ca-bundle\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.754950 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmdm8\" (UniqueName: \"kubernetes.io/projected/fcea68e2-0d37-4812-a7ad-403e59b7b556-kube-api-access-kmdm8\") pod \"cinder-e916-account-create-update-lm2r5\" (UID: \"fcea68e2-0d37-4812-a7ad-403e59b7b556\") " pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.768229 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-db-sync-config-data\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.771715 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-config-data\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.771810 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-combined-ca-bundle\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.785997 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfbhf\" (UniqueName: \"kubernetes.io/projected/983d5293-8413-4a29-88b2-ba775b3b4a8b-kube-api-access-mfbhf\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.796260 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-fee6-account-create-update-jhlbn"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.797267 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.799051 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.807843 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fee6-account-create-update-jhlbn"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.856540 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcea68e2-0d37-4812-a7ad-403e59b7b556-operator-scripts\") pod \"cinder-e916-account-create-update-lm2r5\" (UID: \"fcea68e2-0d37-4812-a7ad-403e59b7b556\") " pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.857665 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvwzc\" 
(UniqueName: \"kubernetes.io/projected/dddbc305-d881-4ef9-ada1-49e8f180162c-kube-api-access-zvwzc\") pod \"barbican-db-create-njfd6\" (UID: \"dddbc305-d881-4ef9-ada1-49e8f180162c\") " pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.857678 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcea68e2-0d37-4812-a7ad-403e59b7b556-operator-scripts\") pod \"cinder-e916-account-create-update-lm2r5\" (UID: \"fcea68e2-0d37-4812-a7ad-403e59b7b556\") " pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.857963 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dddbc305-d881-4ef9-ada1-49e8f180162c-operator-scripts\") pod \"barbican-db-create-njfd6\" (UID: \"dddbc305-d881-4ef9-ada1-49e8f180162c\") " pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.858093 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmdm8\" (UniqueName: \"kubernetes.io/projected/fcea68e2-0d37-4812-a7ad-403e59b7b556-kube-api-access-kmdm8\") pod \"cinder-e916-account-create-update-lm2r5\" (UID: \"fcea68e2-0d37-4812-a7ad-403e59b7b556\") " pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.876698 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-87p82"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.877964 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.880378 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.881928 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9szpl" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.882359 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.883452 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.888221 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-87p82"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.901241 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f862-account-create-update-29qlq"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.901662 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmdm8\" (UniqueName: \"kubernetes.io/projected/fcea68e2-0d37-4812-a7ad-403e59b7b556-kube-api-access-kmdm8\") pod \"cinder-e916-account-create-update-lm2r5\" (UID: \"fcea68e2-0d37-4812-a7ad-403e59b7b556\") " pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.913173 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.920141 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f862-account-create-update-29qlq"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.920221 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.959398 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvwzc\" (UniqueName: \"kubernetes.io/projected/dddbc305-d881-4ef9-ada1-49e8f180162c-kube-api-access-zvwzc\") pod \"barbican-db-create-njfd6\" (UID: \"dddbc305-d881-4ef9-ada1-49e8f180162c\") " pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.959464 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nshl4\" (UniqueName: \"kubernetes.io/projected/c903d652-2880-43bd-9445-f1b03764f413-kube-api-access-nshl4\") pod \"barbican-fee6-account-create-update-jhlbn\" (UID: \"c903d652-2880-43bd-9445-f1b03764f413\") " pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.959517 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dddbc305-d881-4ef9-ada1-49e8f180162c-operator-scripts\") pod \"barbican-db-create-njfd6\" (UID: \"dddbc305-d881-4ef9-ada1-49e8f180162c\") " pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.959548 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c903d652-2880-43bd-9445-f1b03764f413-operator-scripts\") pod \"barbican-fee6-account-create-update-jhlbn\" (UID: 
\"c903d652-2880-43bd-9445-f1b03764f413\") " pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.961550 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dddbc305-d881-4ef9-ada1-49e8f180162c-operator-scripts\") pod \"barbican-db-create-njfd6\" (UID: \"dddbc305-d881-4ef9-ada1-49e8f180162c\") " pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.985929 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvwzc\" (UniqueName: \"kubernetes.io/projected/dddbc305-d881-4ef9-ada1-49e8f180162c-kube-api-access-zvwzc\") pod \"barbican-db-create-njfd6\" (UID: \"dddbc305-d881-4ef9-ada1-49e8f180162c\") " pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.022988 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-s54gq"] Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.024235 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.033536 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.044413 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-s54gq"] Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.062814 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdmfn\" (UniqueName: \"kubernetes.io/projected/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-kube-api-access-zdmfn\") pod \"neutron-f862-account-create-update-29qlq\" (UID: \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\") " pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.062881 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nshl4\" (UniqueName: \"kubernetes.io/projected/c903d652-2880-43bd-9445-f1b03764f413-kube-api-access-nshl4\") pod \"barbican-fee6-account-create-update-jhlbn\" (UID: \"c903d652-2880-43bd-9445-f1b03764f413\") " pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.062906 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-config-data\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.062963 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-combined-ca-bundle\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.062997 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c903d652-2880-43bd-9445-f1b03764f413-operator-scripts\") pod \"barbican-fee6-account-create-update-jhlbn\" (UID: \"c903d652-2880-43bd-9445-f1b03764f413\") " pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.063031 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4b9x\" (UniqueName: \"kubernetes.io/projected/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-kube-api-access-f4b9x\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.063069 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-operator-scripts\") pod \"neutron-f862-account-create-update-29qlq\" (UID: \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\") " pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.064315 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c903d652-2880-43bd-9445-f1b03764f413-operator-scripts\") pod \"barbican-fee6-account-create-update-jhlbn\" (UID: \"c903d652-2880-43bd-9445-f1b03764f413\") " pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.076593 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.109279 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nshl4\" (UniqueName: \"kubernetes.io/projected/c903d652-2880-43bd-9445-f1b03764f413-kube-api-access-nshl4\") pod \"barbican-fee6-account-create-update-jhlbn\" (UID: \"c903d652-2880-43bd-9445-f1b03764f413\") " pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.140174 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.164345 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-operator-scripts\") pod \"neutron-f862-account-create-update-29qlq\" (UID: \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\") " pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.164466 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdmfn\" (UniqueName: \"kubernetes.io/projected/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-kube-api-access-zdmfn\") pod \"neutron-f862-account-create-update-29qlq\" (UID: \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\") " pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.164491 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-config-data\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.164542 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-combined-ca-bundle\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.164574 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fl2t\" (UniqueName: \"kubernetes.io/projected/fd491cd9-f58f-4821-8004-a5a4762d6bdb-kube-api-access-5fl2t\") pod \"neutron-db-create-s54gq\" (UID: \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\") " pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.164627 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd491cd9-f58f-4821-8004-a5a4762d6bdb-operator-scripts\") pod \"neutron-db-create-s54gq\" (UID: \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\") " pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.164661 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4b9x\" (UniqueName: \"kubernetes.io/projected/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-kube-api-access-f4b9x\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.166942 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-operator-scripts\") pod \"neutron-f862-account-create-update-29qlq\" (UID: \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\") " pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.168577 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-config-data\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.170524 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-combined-ca-bundle\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.186261 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4b9x\" (UniqueName: \"kubernetes.io/projected/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-kube-api-access-f4b9x\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.193276 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.197360 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.207310 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdmfn\" (UniqueName: \"kubernetes.io/projected/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-kube-api-access-zdmfn\") pod \"neutron-f862-account-create-update-29qlq\" (UID: \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\") " pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.248703 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.266074 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd491cd9-f58f-4821-8004-a5a4762d6bdb-operator-scripts\") pod \"neutron-db-create-s54gq\" (UID: \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\") " pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.266224 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fl2t\" (UniqueName: \"kubernetes.io/projected/fd491cd9-f58f-4821-8004-a5a4762d6bdb-kube-api-access-5fl2t\") pod \"neutron-db-create-s54gq\" (UID: \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\") " pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.267038 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd491cd9-f58f-4821-8004-a5a4762d6bdb-operator-scripts\") pod \"neutron-db-create-s54gq\" (UID: \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\") " pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.288349 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fl2t\" (UniqueName: \"kubernetes.io/projected/fd491cd9-f58f-4821-8004-a5a4762d6bdb-kube-api-access-5fl2t\") pod \"neutron-db-create-s54gq\" (UID: \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\") " pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.396796 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.979409 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": dial tcp 10.217.0.111:9090: connect: connection refused" Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.011072 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8f782"] Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.012182 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8f782" Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.014629 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.028123 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8f782"] Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.080382 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq5mf\" (UniqueName: \"kubernetes.io/projected/4edc6296-1ba6-43f7-a076-93f94c77a2c9-kube-api-access-cq5mf\") pod \"root-account-create-update-8f782\" (UID: \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\") " pod="openstack/root-account-create-update-8f782" Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.080850 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4edc6296-1ba6-43f7-a076-93f94c77a2c9-operator-scripts\") pod \"root-account-create-update-8f782\" (UID: \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\") " pod="openstack/root-account-create-update-8f782" Feb 18 
19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.182273 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq5mf\" (UniqueName: \"kubernetes.io/projected/4edc6296-1ba6-43f7-a076-93f94c77a2c9-kube-api-access-cq5mf\") pod \"root-account-create-update-8f782\" (UID: \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\") " pod="openstack/root-account-create-update-8f782" Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.182476 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4edc6296-1ba6-43f7-a076-93f94c77a2c9-operator-scripts\") pod \"root-account-create-update-8f782\" (UID: \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\") " pod="openstack/root-account-create-update-8f782" Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.185794 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4edc6296-1ba6-43f7-a076-93f94c77a2c9-operator-scripts\") pod \"root-account-create-update-8f782\" (UID: \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\") " pod="openstack/root-account-create-update-8f782" Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.200899 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq5mf\" (UniqueName: \"kubernetes.io/projected/4edc6296-1ba6-43f7-a076-93f94c77a2c9-kube-api-access-cq5mf\") pod \"root-account-create-update-8f782\" (UID: \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\") " pod="openstack/root-account-create-update-8f782" Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.336580 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8f782" Feb 18 19:34:23 crc kubenswrapper[4942]: E0218 19:34:23.047708 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 18 19:34:23 crc kubenswrapper[4942]: E0218 19:34:23.047907 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb9h8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privil
eged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-zw8ls_openstack(72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:34:23 crc kubenswrapper[4942]: E0218 19:34:23.049624 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-zw8ls" podUID="72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.571886 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.665246 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f862-account-create-update-29qlq"] Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.674119 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerDied","Data":"1193c3f2b445b73f045913a6f677cad12654f417ef42c816b25977d36d83acd7"} Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.674186 4942 scope.go:117] "RemoveContainer" containerID="fd3aef2dcd467a4e4443cb718f2ad37e73afe0c2cc787eca566999184738b19b" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.674210 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:23 crc kubenswrapper[4942]: E0218 19:34:23.675080 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-zw8ls" podUID="72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.678225 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-njfd6"] Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.681326 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-thanos-prometheus-http-client-file\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.682447 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.682522 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-2\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.682556 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config-out\") pod 
\"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.682600 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-web-config\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.682651 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pvjr\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-kube-api-access-6pvjr\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.682708 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-tls-assets\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.682745 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-1\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.683116 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.683170 4942 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-0\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.683942 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.684420 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.685159 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.687871 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-kube-api-access-6pvjr" (OuterVolumeSpecName: "kube-api-access-6pvjr") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "kube-api-access-6pvjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.688300 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e916-account-create-update-lm2r5"] Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.691804 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config" (OuterVolumeSpecName: "config") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.703305 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config-out" (OuterVolumeSpecName: "config-out") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.703488 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.703981 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.705736 4942 scope.go:117] "RemoveContainer" containerID="ebae20c9222b3aee15451c1f0bbaa8cd79204c32bb3e86cff12a92b878e9497f" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.708109 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.711916 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.726435 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-web-config" (OuterVolumeSpecName: "web-config") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.741218 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.741269 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.746189 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.762422 4942 scope.go:117] "RemoveContainer" containerID="19ca73d07d23c2f4be951d7909e61b79e21cfc7d91c0a9ffd938eb9ea1e5646a" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786393 4942 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786420 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786430 4942 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786453 4942 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786465 4942 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786475 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pvjr\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-kube-api-access-6pvjr\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786483 4942 reconciler_common.go:293] "Volume 
detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786493 4942 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786515 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") on node \"crc\" " Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786526 4942 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.820405 4942 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.820523 4942 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5") on node "crc" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.829088 4942 scope.go:117] "RemoveContainer" containerID="81a3193c7a82e4ed4f2a5322d29f8d82024b97bad905eacfd10f035fcf65ddf4" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.882945 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-s54gq"] Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.887692 4942 reconciler_common.go:293] "Volume detached for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.913754 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fee6-account-create-update-jhlbn"] Feb 18 19:34:23 crc kubenswrapper[4942]: W0218 19:34:23.919075 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd491cd9_f58f_4821_8004_a5a4762d6bdb.slice/crio-15e5af09725a8d061b0fd0aa1bf3763e9837770b442d86d03cf057837a61bec1 WatchSource:0}: Error finding container 15e5af09725a8d061b0fd0aa1bf3763e9837770b442d86d03cf057837a61bec1: Status 404 returned error can't find the container with id 15e5af09725a8d061b0fd0aa1bf3763e9837770b442d86d03cf057837a61bec1 Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.936873 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4zlhp"] Feb 18 19:34:23 crc kubenswrapper[4942]: W0218 19:34:23.963175 4942 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc903d652_2880_43bd_9445_f1b03764f413.slice/crio-eeca83cb3b414d5fe71656c4ea46b51dc52f248844cf4487dafc4102ebb78727 WatchSource:0}: Error finding container eeca83cb3b414d5fe71656c4ea46b51dc52f248844cf4487dafc4102ebb78727: Status 404 returned error can't find the container with id eeca83cb3b414d5fe71656c4ea46b51dc52f248844cf4487dafc4102ebb78727 Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.026092 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.098936 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-87p82"] Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.163268 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8f782"] Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.202229 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.205727 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-4h9n5"] Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.261923 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.275678 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.316439 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:34:24 crc kubenswrapper[4942]: E0218 19:34:24.317036 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="prometheus" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.317048 4942 
state_mem.go:107] "Deleted CPUSet assignment" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="prometheus" Feb 18 19:34:24 crc kubenswrapper[4942]: E0218 19:34:24.317125 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="init-config-reloader" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.317132 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="init-config-reloader" Feb 18 19:34:24 crc kubenswrapper[4942]: E0218 19:34:24.317143 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="config-reloader" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.317148 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="config-reloader" Feb 18 19:34:24 crc kubenswrapper[4942]: E0218 19:34:24.317157 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="thanos-sidecar" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.317163 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="thanos-sidecar" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.317328 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="prometheus" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.317340 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="config-reloader" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.317350 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="thanos-sidecar" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.318862 4942 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.322552 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.322901 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.328162 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7f4m2" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.328246 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.328307 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.328500 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.328516 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.328640 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.338293 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.356639 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.400300 
4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.404653 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.404927 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/219b2aa4-0497-40f8-a3d0-947d37be720d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.404956 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.404977 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.404997 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s685z\" 
(UniqueName: \"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-kube-api-access-s685z\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.405033 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.405081 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.405102 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.405125 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " 
pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.405154 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.405173 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-config\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.405206 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.405262 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507010 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507067 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507095 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/219b2aa4-0497-40f8-a3d0-947d37be720d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507115 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507136 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507154 4942 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-s685z\" (UniqueName: \"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-kube-api-access-s685z\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507188 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507241 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507279 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507316 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " 
pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507342 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507361 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-config\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507378 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.508124 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.509423 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-2\") pod 
\"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.509877 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.513651 4942 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.513681 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/70b345b463ff13ff33bce45da0f4a8796a1574afa2d8fd2ecf4f2239b34767fb/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.515447 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.519899 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-secret-combined-ca-bundle\") pod 
\"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.520035 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.521094 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.522871 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/219b2aa4-0497-40f8-a3d0-947d37be720d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.525417 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.525963 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s685z\" (UniqueName: 
\"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-kube-api-access-s685z\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.526060 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-config\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.526611 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.586639 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.645528 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.695265 4942 generic.go:334] "Generic (PLEG): container finished" podID="4edc6296-1ba6-43f7-a076-93f94c77a2c9" containerID="811ec8cee78f943aac4bbfb29b95ea4e9d51e51453fc9da48c7eabb6372bfb2b" exitCode=0 Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.695323 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8f782" event={"ID":"4edc6296-1ba6-43f7-a076-93f94c77a2c9","Type":"ContainerDied","Data":"811ec8cee78f943aac4bbfb29b95ea4e9d51e51453fc9da48c7eabb6372bfb2b"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.695355 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8f782" event={"ID":"4edc6296-1ba6-43f7-a076-93f94c77a2c9","Type":"ContainerStarted","Data":"b99ac8b869534b1562557fd7d216264bc451f958990baf05beabf148b59e05dd"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.698787 4942 generic.go:334] "Generic (PLEG): container finished" podID="9a8e424f-44a5-4eaa-9f3f-882f070aa404" containerID="b1d49648de6b3a759e8404975f38b8d6b28e2ed6cf3c88b12649b6a3fed64a43" exitCode=0 Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.698857 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4zlhp" event={"ID":"9a8e424f-44a5-4eaa-9f3f-882f070aa404","Type":"ContainerDied","Data":"b1d49648de6b3a759e8404975f38b8d6b28e2ed6cf3c88b12649b6a3fed64a43"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.698978 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4zlhp" event={"ID":"9a8e424f-44a5-4eaa-9f3f-882f070aa404","Type":"ContainerStarted","Data":"782c7169a527587f0df4ddf04bca08ff30f9b37d9fbb836be2b06d269c8af331"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.700789 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s54gq" 
event={"ID":"fd491cd9-f58f-4821-8004-a5a4762d6bdb","Type":"ContainerStarted","Data":"55245bf67e01b4a9996ff8822e688651e94d412e130a306f9914243a723acae1"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.700824 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s54gq" event={"ID":"fd491cd9-f58f-4821-8004-a5a4762d6bdb","Type":"ContainerStarted","Data":"15e5af09725a8d061b0fd0aa1bf3763e9837770b442d86d03cf057837a61bec1"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.708831 4942 generic.go:334] "Generic (PLEG): container finished" podID="fcea68e2-0d37-4812-a7ad-403e59b7b556" containerID="0e02d4fe73a4e293f62bf869926c2629a47060f29d5a8a14b093d650895a851c" exitCode=0 Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.708907 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e916-account-create-update-lm2r5" event={"ID":"fcea68e2-0d37-4812-a7ad-403e59b7b556","Type":"ContainerDied","Data":"0e02d4fe73a4e293f62bf869926c2629a47060f29d5a8a14b093d650895a851c"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.708936 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e916-account-create-update-lm2r5" event={"ID":"fcea68e2-0d37-4812-a7ad-403e59b7b556","Type":"ContainerStarted","Data":"c1d0729b4bd7253e74f4b6ab5454c70366e85e881a46ee7d2e977d6d54e404bc"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.713537 4942 generic.go:334] "Generic (PLEG): container finished" podID="dddbc305-d881-4ef9-ada1-49e8f180162c" containerID="727dde1e275a9b0b467f516dab63cba62b27e6168562e7bbd076fe7b30b2869f" exitCode=0 Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.713611 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-njfd6" event={"ID":"dddbc305-d881-4ef9-ada1-49e8f180162c","Type":"ContainerDied","Data":"727dde1e275a9b0b467f516dab63cba62b27e6168562e7bbd076fe7b30b2869f"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.713637 4942 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-njfd6" event={"ID":"dddbc305-d881-4ef9-ada1-49e8f180162c","Type":"ContainerStarted","Data":"49c2a64763568347502e4187d0859e00063ef22d9d7344a3f0903d0addb05807"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.717111 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fee6-account-create-update-jhlbn" event={"ID":"c903d652-2880-43bd-9445-f1b03764f413","Type":"ContainerStarted","Data":"e0735df4037c9d26aa2f69d57c8e775cb7c18bc1fdb68127c0b914f822f83bec"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.717144 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fee6-account-create-update-jhlbn" event={"ID":"c903d652-2880-43bd-9445-f1b03764f413","Type":"ContainerStarted","Data":"eeca83cb3b414d5fe71656c4ea46b51dc52f248844cf4487dafc4102ebb78727"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.723106 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4h9n5" event={"ID":"983d5293-8413-4a29-88b2-ba775b3b4a8b","Type":"ContainerStarted","Data":"4d89390c95728bcf123b54a9e3391d1834069387fcdf07d8c1f1a0845cb094b5"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.730324 4942 generic.go:334] "Generic (PLEG): container finished" podID="35dbdf24-b5f9-4a19-96f9-1fe390df90e1" containerID="a8c3861121c5594ca501846681ea609d414d4c26e10e1b891f8ff728174138b2" exitCode=0 Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.730396 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f862-account-create-update-29qlq" event={"ID":"35dbdf24-b5f9-4a19-96f9-1fe390df90e1","Type":"ContainerDied","Data":"a8c3861121c5594ca501846681ea609d414d4c26e10e1b891f8ff728174138b2"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.730424 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f862-account-create-update-29qlq" 
event={"ID":"35dbdf24-b5f9-4a19-96f9-1fe390df90e1","Type":"ContainerStarted","Data":"81ed1e2a309c2b32082391ca65190ea386781edda62314bc4f655da6fdbe708c"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.744116 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-87p82" event={"ID":"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5","Type":"ContainerStarted","Data":"503b84b004f829c8689047881a5f94617e83302d763c11ea9186e35169366871"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.749787 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"85cdef38bdb1a5a0192a5ed4d12d7a5edfcde5f23ee43b27270ae3f89c1d09de"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.796958 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-fee6-account-create-update-jhlbn" podStartSLOduration=7.796937105 podStartE2EDuration="7.796937105s" podCreationTimestamp="2026-02-18 19:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:34:24.791843614 +0000 UTC m=+1024.496776279" watchObservedRunningTime="2026-02-18 19:34:24.796937105 +0000 UTC m=+1024.501869770" Feb 18 19:34:25 crc kubenswrapper[4942]: I0218 19:34:25.053695 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" path="/var/lib/kubelet/pods/543db3d4-08d8-473f-a6ad-7e6a5bb9734c/volumes" Feb 18 19:34:25 crc kubenswrapper[4942]: I0218 19:34:25.141010 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:34:25 crc kubenswrapper[4942]: I0218 19:34:25.764657 4942 generic.go:334] "Generic (PLEG): container finished" podID="c903d652-2880-43bd-9445-f1b03764f413" 
containerID="e0735df4037c9d26aa2f69d57c8e775cb7c18bc1fdb68127c0b914f822f83bec" exitCode=0 Feb 18 19:34:25 crc kubenswrapper[4942]: I0218 19:34:25.764776 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fee6-account-create-update-jhlbn" event={"ID":"c903d652-2880-43bd-9445-f1b03764f413","Type":"ContainerDied","Data":"e0735df4037c9d26aa2f69d57c8e775cb7c18bc1fdb68127c0b914f822f83bec"} Feb 18 19:34:25 crc kubenswrapper[4942]: I0218 19:34:25.771361 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s54gq" event={"ID":"fd491cd9-f58f-4821-8004-a5a4762d6bdb","Type":"ContainerDied","Data":"55245bf67e01b4a9996ff8822e688651e94d412e130a306f9914243a723acae1"} Feb 18 19:34:25 crc kubenswrapper[4942]: I0218 19:34:25.773967 4942 generic.go:334] "Generic (PLEG): container finished" podID="fd491cd9-f58f-4821-8004-a5a4762d6bdb" containerID="55245bf67e01b4a9996ff8822e688651e94d412e130a306f9914243a723acae1" exitCode=0 Feb 18 19:34:25 crc kubenswrapper[4942]: I0218 19:34:25.776554 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerStarted","Data":"60c6687648dd41b94a4225ed03866cf4c665cec18c0eb5d84fcb09f0dbc7012b"} Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.459210 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.472215 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.481470 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.489365 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.497954 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8f782" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.583805 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcea68e2-0d37-4812-a7ad-403e59b7b556-operator-scripts\") pod \"fcea68e2-0d37-4812-a7ad-403e59b7b556\" (UID: \"fcea68e2-0d37-4812-a7ad-403e59b7b556\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.583860 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq5mf\" (UniqueName: \"kubernetes.io/projected/4edc6296-1ba6-43f7-a076-93f94c77a2c9-kube-api-access-cq5mf\") pod \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\" (UID: \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.583949 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5lhw\" (UniqueName: \"kubernetes.io/projected/9a8e424f-44a5-4eaa-9f3f-882f070aa404-kube-api-access-q5lhw\") pod \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\" (UID: \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.583984 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4edc6296-1ba6-43f7-a076-93f94c77a2c9-operator-scripts\") pod \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\" (UID: \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584048 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdmfn\" (UniqueName: \"kubernetes.io/projected/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-kube-api-access-zdmfn\") pod 
\"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\" (UID: \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584066 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a8e424f-44a5-4eaa-9f3f-882f070aa404-operator-scripts\") pod \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\" (UID: \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584103 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmdm8\" (UniqueName: \"kubernetes.io/projected/fcea68e2-0d37-4812-a7ad-403e59b7b556-kube-api-access-kmdm8\") pod \"fcea68e2-0d37-4812-a7ad-403e59b7b556\" (UID: \"fcea68e2-0d37-4812-a7ad-403e59b7b556\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584125 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvwzc\" (UniqueName: \"kubernetes.io/projected/dddbc305-d881-4ef9-ada1-49e8f180162c-kube-api-access-zvwzc\") pod \"dddbc305-d881-4ef9-ada1-49e8f180162c\" (UID: \"dddbc305-d881-4ef9-ada1-49e8f180162c\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584146 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-operator-scripts\") pod \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\" (UID: \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584224 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dddbc305-d881-4ef9-ada1-49e8f180162c-operator-scripts\") pod \"dddbc305-d881-4ef9-ada1-49e8f180162c\" (UID: \"dddbc305-d881-4ef9-ada1-49e8f180162c\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584671 4942 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcea68e2-0d37-4812-a7ad-403e59b7b556-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fcea68e2-0d37-4812-a7ad-403e59b7b556" (UID: "fcea68e2-0d37-4812-a7ad-403e59b7b556"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584720 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4edc6296-1ba6-43f7-a076-93f94c77a2c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4edc6296-1ba6-43f7-a076-93f94c77a2c9" (UID: "4edc6296-1ba6-43f7-a076-93f94c77a2c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584975 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dddbc305-d881-4ef9-ada1-49e8f180162c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dddbc305-d881-4ef9-ada1-49e8f180162c" (UID: "dddbc305-d881-4ef9-ada1-49e8f180162c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.585450 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35dbdf24-b5f9-4a19-96f9-1fe390df90e1" (UID: "35dbdf24-b5f9-4a19-96f9-1fe390df90e1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.585515 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a8e424f-44a5-4eaa-9f3f-882f070aa404-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a8e424f-44a5-4eaa-9f3f-882f070aa404" (UID: "9a8e424f-44a5-4eaa-9f3f-882f070aa404"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.589932 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcea68e2-0d37-4812-a7ad-403e59b7b556-kube-api-access-kmdm8" (OuterVolumeSpecName: "kube-api-access-kmdm8") pod "fcea68e2-0d37-4812-a7ad-403e59b7b556" (UID: "fcea68e2-0d37-4812-a7ad-403e59b7b556"). InnerVolumeSpecName "kube-api-access-kmdm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.590666 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a8e424f-44a5-4eaa-9f3f-882f070aa404-kube-api-access-q5lhw" (OuterVolumeSpecName: "kube-api-access-q5lhw") pod "9a8e424f-44a5-4eaa-9f3f-882f070aa404" (UID: "9a8e424f-44a5-4eaa-9f3f-882f070aa404"). InnerVolumeSpecName "kube-api-access-q5lhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.665970 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-kube-api-access-zdmfn" (OuterVolumeSpecName: "kube-api-access-zdmfn") pod "35dbdf24-b5f9-4a19-96f9-1fe390df90e1" (UID: "35dbdf24-b5f9-4a19-96f9-1fe390df90e1"). InnerVolumeSpecName "kube-api-access-zdmfn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.675329 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4edc6296-1ba6-43f7-a076-93f94c77a2c9-kube-api-access-cq5mf" (OuterVolumeSpecName: "kube-api-access-cq5mf") pod "4edc6296-1ba6-43f7-a076-93f94c77a2c9" (UID: "4edc6296-1ba6-43f7-a076-93f94c77a2c9"). InnerVolumeSpecName "kube-api-access-cq5mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.676450 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dddbc305-d881-4ef9-ada1-49e8f180162c-kube-api-access-zvwzc" (OuterVolumeSpecName: "kube-api-access-zvwzc") pod "dddbc305-d881-4ef9-ada1-49e8f180162c" (UID: "dddbc305-d881-4ef9-ada1-49e8f180162c"). InnerVolumeSpecName "kube-api-access-zvwzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686667 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5lhw\" (UniqueName: \"kubernetes.io/projected/9a8e424f-44a5-4eaa-9f3f-882f070aa404-kube-api-access-q5lhw\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686709 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4edc6296-1ba6-43f7-a076-93f94c77a2c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686728 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdmfn\" (UniqueName: \"kubernetes.io/projected/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-kube-api-access-zdmfn\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686746 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9a8e424f-44a5-4eaa-9f3f-882f070aa404-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686784 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmdm8\" (UniqueName: \"kubernetes.io/projected/fcea68e2-0d37-4812-a7ad-403e59b7b556-kube-api-access-kmdm8\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686852 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvwzc\" (UniqueName: \"kubernetes.io/projected/dddbc305-d881-4ef9-ada1-49e8f180162c-kube-api-access-zvwzc\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686870 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686886 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dddbc305-d881-4ef9-ada1-49e8f180162c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686902 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcea68e2-0d37-4812-a7ad-403e59b7b556-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686919 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq5mf\" (UniqueName: \"kubernetes.io/projected/4edc6296-1ba6-43f7-a076-93f94c77a2c9-kube-api-access-cq5mf\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.791607 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4zlhp" 
event={"ID":"9a8e424f-44a5-4eaa-9f3f-882f070aa404","Type":"ContainerDied","Data":"782c7169a527587f0df4ddf04bca08ff30f9b37d9fbb836be2b06d269c8af331"} Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.791648 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="782c7169a527587f0df4ddf04bca08ff30f9b37d9fbb836be2b06d269c8af331" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.791647 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.804584 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f862-account-create-update-29qlq" event={"ID":"35dbdf24-b5f9-4a19-96f9-1fe390df90e1","Type":"ContainerDied","Data":"81ed1e2a309c2b32082391ca65190ea386781edda62314bc4f655da6fdbe708c"} Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.804610 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.804623 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81ed1e2a309c2b32082391ca65190ea386781edda62314bc4f655da6fdbe708c" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.806478 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e916-account-create-update-lm2r5" event={"ID":"fcea68e2-0d37-4812-a7ad-403e59b7b556","Type":"ContainerDied","Data":"c1d0729b4bd7253e74f4b6ab5454c70366e85e881a46ee7d2e977d6d54e404bc"} Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.806503 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1d0729b4bd7253e74f4b6ab5454c70366e85e881a46ee7d2e977d6d54e404bc" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.806521 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.808138 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-njfd6" event={"ID":"dddbc305-d881-4ef9-ada1-49e8f180162c","Type":"ContainerDied","Data":"49c2a64763568347502e4187d0859e00063ef22d9d7344a3f0903d0addb05807"} Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.808208 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49c2a64763568347502e4187d0859e00063ef22d9d7344a3f0903d0addb05807" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.808229 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.810599 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8f782" event={"ID":"4edc6296-1ba6-43f7-a076-93f94c77a2c9","Type":"ContainerDied","Data":"b99ac8b869534b1562557fd7d216264bc451f958990baf05beabf148b59e05dd"} Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.810629 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b99ac8b869534b1562557fd7d216264bc451f958990baf05beabf148b59e05dd" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.810648 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8f782" Feb 18 19:34:28 crc kubenswrapper[4942]: I0218 19:34:28.822076 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerStarted","Data":"7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067"} Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.848255 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fee6-account-create-update-jhlbn" event={"ID":"c903d652-2880-43bd-9445-f1b03764f413","Type":"ContainerDied","Data":"eeca83cb3b414d5fe71656c4ea46b51dc52f248844cf4487dafc4102ebb78727"} Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.848574 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeca83cb3b414d5fe71656c4ea46b51dc52f248844cf4487dafc4102ebb78727" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.851317 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s54gq" event={"ID":"fd491cd9-f58f-4821-8004-a5a4762d6bdb","Type":"ContainerDied","Data":"15e5af09725a8d061b0fd0aa1bf3763e9837770b442d86d03cf057837a61bec1"} Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.851343 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15e5af09725a8d061b0fd0aa1bf3763e9837770b442d86d03cf057837a61bec1" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.856073 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.866526 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.959579 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fl2t\" (UniqueName: \"kubernetes.io/projected/fd491cd9-f58f-4821-8004-a5a4762d6bdb-kube-api-access-5fl2t\") pod \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\" (UID: \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\") " Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.959844 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c903d652-2880-43bd-9445-f1b03764f413-operator-scripts\") pod \"c903d652-2880-43bd-9445-f1b03764f413\" (UID: \"c903d652-2880-43bd-9445-f1b03764f413\") " Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.960053 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd491cd9-f58f-4821-8004-a5a4762d6bdb-operator-scripts\") pod \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\" (UID: \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\") " Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.960586 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nshl4\" (UniqueName: \"kubernetes.io/projected/c903d652-2880-43bd-9445-f1b03764f413-kube-api-access-nshl4\") pod \"c903d652-2880-43bd-9445-f1b03764f413\" (UID: \"c903d652-2880-43bd-9445-f1b03764f413\") " Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.960427 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c903d652-2880-43bd-9445-f1b03764f413-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c903d652-2880-43bd-9445-f1b03764f413" (UID: "c903d652-2880-43bd-9445-f1b03764f413"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.960488 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd491cd9-f58f-4821-8004-a5a4762d6bdb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd491cd9-f58f-4821-8004-a5a4762d6bdb" (UID: "fd491cd9-f58f-4821-8004-a5a4762d6bdb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.961578 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c903d652-2880-43bd-9445-f1b03764f413-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.961667 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd491cd9-f58f-4821-8004-a5a4762d6bdb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.965185 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd491cd9-f58f-4821-8004-a5a4762d6bdb-kube-api-access-5fl2t" (OuterVolumeSpecName: "kube-api-access-5fl2t") pod "fd491cd9-f58f-4821-8004-a5a4762d6bdb" (UID: "fd491cd9-f58f-4821-8004-a5a4762d6bdb"). InnerVolumeSpecName "kube-api-access-5fl2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.965510 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c903d652-2880-43bd-9445-f1b03764f413-kube-api-access-nshl4" (OuterVolumeSpecName: "kube-api-access-nshl4") pod "c903d652-2880-43bd-9445-f1b03764f413" (UID: "c903d652-2880-43bd-9445-f1b03764f413"). InnerVolumeSpecName "kube-api-access-nshl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:31 crc kubenswrapper[4942]: I0218 19:34:31.062988 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nshl4\" (UniqueName: \"kubernetes.io/projected/c903d652-2880-43bd-9445-f1b03764f413-kube-api-access-nshl4\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:31 crc kubenswrapper[4942]: I0218 19:34:31.063259 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fl2t\" (UniqueName: \"kubernetes.io/projected/fd491cd9-f58f-4821-8004-a5a4762d6bdb-kube-api-access-5fl2t\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:31 crc kubenswrapper[4942]: I0218 19:34:31.860489 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:31 crc kubenswrapper[4942]: I0218 19:34:31.860522 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:34 crc kubenswrapper[4942]: I0218 19:34:34.894639 4942 generic.go:334] "Generic (PLEG): container finished" podID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerID="7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067" exitCode=0 Feb 18 19:34:34 crc kubenswrapper[4942]: I0218 19:34:34.894751 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerDied","Data":"7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067"} Feb 18 19:34:43 crc kubenswrapper[4942]: E0218 19:34:43.000234 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.12:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Feb 18 19:34:43 crc kubenswrapper[4942]: E0218 19:34:43.000597 4942 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled 
desc = copying config: context canceled" image="38.102.83.12:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Feb 18 19:34:43 crc kubenswrapper[4942]: E0218 19:34:43.000709 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.12:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfbhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinux
Options:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-4h9n5_openstack(983d5293-8413-4a29-88b2-ba775b3b4a8b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:34:43 crc kubenswrapper[4942]: E0218 19:34:43.001911 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-4h9n5" podUID="983d5293-8413-4a29-88b2-ba775b3b4a8b" Feb 18 19:34:43 crc kubenswrapper[4942]: I0218 19:34:43.992020 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zw8ls" event={"ID":"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3","Type":"ContainerStarted","Data":"8c6545f8eaa3b666b06d888c16ee9caa900adcec0bcd683e72e4f96180bd297d"} Feb 18 19:34:44 crc kubenswrapper[4942]: I0218 19:34:44.003166 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-87p82" event={"ID":"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5","Type":"ContainerStarted","Data":"373bd2d7e6e62cf5defbed6522169de2de3264581e7024f113223b1465d241c5"} Feb 18 19:34:44 crc kubenswrapper[4942]: I0218 19:34:44.007102 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"cf1f1da32f81e24045adc2bc49f551d88ed9f8d3b07c88459a6652b111442fd0"} Feb 18 19:34:44 crc kubenswrapper[4942]: I0218 19:34:44.007143 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"9311d8502dc45863220a161c11f863317f5924befd91b5142728d84955c095bd"} Feb 18 19:34:44 crc kubenswrapper[4942]: I0218 19:34:44.007153 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"bd26707859e9dcbcf2d0119ae331daae4b638ae2c767bd5fec59453b44e050ae"} Feb 18 19:34:44 crc kubenswrapper[4942]: I0218 19:34:44.007162 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"993c8c60fd3b39f8775403a8dd8bc1d8168b5636a056b78d38e7028ed5ec9139"} Feb 18 19:34:44 crc kubenswrapper[4942]: I0218 19:34:44.009301 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerStarted","Data":"3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845"} Feb 18 19:34:44 crc kubenswrapper[4942]: E0218 19:34:44.011024 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.12:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-4h9n5" podUID="983d5293-8413-4a29-88b2-ba775b3b4a8b" Feb 18 19:34:44 crc kubenswrapper[4942]: I0218 19:34:44.021062 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-zw8ls" podStartSLOduration=2.620893133 podStartE2EDuration="39.021044111s" podCreationTimestamp="2026-02-18 19:34:05 +0000 UTC" firstStartedPulling="2026-02-18 19:34:06.637294534 +0000 UTC m=+1006.342227199" lastFinishedPulling="2026-02-18 19:34:43.037445512 +0000 UTC m=+1042.742378177" observedRunningTime="2026-02-18 
19:34:44.02062406 +0000 UTC m=+1043.725556735" watchObservedRunningTime="2026-02-18 19:34:44.021044111 +0000 UTC m=+1043.725976776" Feb 18 19:34:44 crc kubenswrapper[4942]: I0218 19:34:44.085945 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-87p82" podStartSLOduration=8.295739952 podStartE2EDuration="27.085920687s" podCreationTimestamp="2026-02-18 19:34:17 +0000 UTC" firstStartedPulling="2026-02-18 19:34:24.160238045 +0000 UTC m=+1023.865170710" lastFinishedPulling="2026-02-18 19:34:42.95041877 +0000 UTC m=+1042.655351445" observedRunningTime="2026-02-18 19:34:44.069957732 +0000 UTC m=+1043.774890387" watchObservedRunningTime="2026-02-18 19:34:44.085920687 +0000 UTC m=+1043.790853342" Feb 18 19:34:46 crc kubenswrapper[4942]: I0218 19:34:46.031028 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"f5c0c2e09a2d8742eb8265dd456d96d2d3f5e3e4eade37d79f317982057f5219"} Feb 18 19:34:46 crc kubenswrapper[4942]: I0218 19:34:46.031512 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"92c71722c3a69153f783c6fc61267ad29332d27b46fbbddb7d510351acdc5d7d"} Feb 18 19:34:47 crc kubenswrapper[4942]: I0218 19:34:47.038812 4942 generic.go:334] "Generic (PLEG): container finished" podID="7ed4f34d-fe0d-402c-95d3-171e73eb5bd5" containerID="373bd2d7e6e62cf5defbed6522169de2de3264581e7024f113223b1465d241c5" exitCode=0 Feb 18 19:34:47 crc kubenswrapper[4942]: I0218 19:34:47.045954 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-87p82" event={"ID":"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5","Type":"ContainerDied","Data":"373bd2d7e6e62cf5defbed6522169de2de3264581e7024f113223b1465d241c5"} Feb 18 19:34:47 crc kubenswrapper[4942]: I0218 19:34:47.046005 4942 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"e916eefff7399466ea313f01b28d972a7abc4a8738eb47fdc513b3228b3584fc"} Feb 18 19:34:47 crc kubenswrapper[4942]: I0218 19:34:47.046019 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"31971173ad26ea7c521fbde21c7957bb0b54e8c3c4403538b81a2957f17c3ea2"} Feb 18 19:34:47 crc kubenswrapper[4942]: I0218 19:34:47.046031 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerStarted","Data":"d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98"} Feb 18 19:34:47 crc kubenswrapper[4942]: I0218 19:34:47.046044 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerStarted","Data":"188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109"} Feb 18 19:34:47 crc kubenswrapper[4942]: I0218 19:34:47.107591 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=23.107570353 podStartE2EDuration="23.107570353s" podCreationTimestamp="2026-02-18 19:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:34:47.077732077 +0000 UTC m=+1046.782664792" watchObservedRunningTime="2026-02-18 19:34:47.107570353 +0000 UTC m=+1046.812503038" Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.062222 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"08b2c386f0f91e534946abe8e67778681100e9eeb1394a2babb8c6b780a74954"} Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.459625 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.604444 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-config-data\") pod \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.604578 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-combined-ca-bundle\") pod \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.604713 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4b9x\" (UniqueName: \"kubernetes.io/projected/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-kube-api-access-f4b9x\") pod \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.609237 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-kube-api-access-f4b9x" (OuterVolumeSpecName: "kube-api-access-f4b9x") pod "7ed4f34d-fe0d-402c-95d3-171e73eb5bd5" (UID: "7ed4f34d-fe0d-402c-95d3-171e73eb5bd5"). InnerVolumeSpecName "kube-api-access-f4b9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.632777 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ed4f34d-fe0d-402c-95d3-171e73eb5bd5" (UID: "7ed4f34d-fe0d-402c-95d3-171e73eb5bd5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.652559 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-config-data" (OuterVolumeSpecName: "config-data") pod "7ed4f34d-fe0d-402c-95d3-171e73eb5bd5" (UID: "7ed4f34d-fe0d-402c-95d3-171e73eb5bd5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.709012 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4b9x\" (UniqueName: \"kubernetes.io/projected/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-kube-api-access-f4b9x\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.709040 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.709049 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.068679 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-87p82" 
event={"ID":"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5","Type":"ContainerDied","Data":"503b84b004f829c8689047881a5f94617e83302d763c11ea9186e35169366871"} Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.068731 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="503b84b004f829c8689047881a5f94617e83302d763c11ea9186e35169366871" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.068697 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.077924 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"2316229b46c142d52d046e76042a6420d0bd812349a87939b21cd4cc7fe128fa"} Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.077968 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"714f124696e0ff58d3540491ec20f425179bcc1b2713feed3e5e74302085bc84"} Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.077980 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"f15078d7065d6bd0395d8455dba0a45032b03213e63cad816baa689eba81e9bc"} Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.077988 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"f3ff715f94be1a74557b976fd4825fabb666c14abf91066881f881170e5835ba"} Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.077998 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"cdd2348a1592a89a97373234a5f964782a66ea575b8d5a650dbdc5d95f27c3bd"} Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409125 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wknkh"] Feb 18 19:34:49 crc kubenswrapper[4942]: E0218 19:34:49.409534 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcea68e2-0d37-4812-a7ad-403e59b7b556" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409554 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcea68e2-0d37-4812-a7ad-403e59b7b556" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: E0218 19:34:49.409572 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35dbdf24-b5f9-4a19-96f9-1fe390df90e1" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409579 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="35dbdf24-b5f9-4a19-96f9-1fe390df90e1" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: E0218 19:34:49.409595 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c903d652-2880-43bd-9445-f1b03764f413" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409601 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="c903d652-2880-43bd-9445-f1b03764f413" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: E0218 19:34:49.409610 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4edc6296-1ba6-43f7-a076-93f94c77a2c9" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409616 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4edc6296-1ba6-43f7-a076-93f94c77a2c9" 
containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: E0218 19:34:49.409624 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ed4f34d-fe0d-402c-95d3-171e73eb5bd5" containerName="keystone-db-sync" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409629 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ed4f34d-fe0d-402c-95d3-171e73eb5bd5" containerName="keystone-db-sync" Feb 18 19:34:49 crc kubenswrapper[4942]: E0218 19:34:49.409640 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a8e424f-44a5-4eaa-9f3f-882f070aa404" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409647 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a8e424f-44a5-4eaa-9f3f-882f070aa404" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: E0218 19:34:49.409659 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dddbc305-d881-4ef9-ada1-49e8f180162c" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409667 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="dddbc305-d881-4ef9-ada1-49e8f180162c" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: E0218 19:34:49.409675 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd491cd9-f58f-4821-8004-a5a4762d6bdb" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409681 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd491cd9-f58f-4821-8004-a5a4762d6bdb" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409893 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="35dbdf24-b5f9-4a19-96f9-1fe390df90e1" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409914 4942 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fcea68e2-0d37-4812-a7ad-403e59b7b556" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409923 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="c903d652-2880-43bd-9445-f1b03764f413" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409937 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4edc6296-1ba6-43f7-a076-93f94c77a2c9" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409959 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="dddbc305-d881-4ef9-ada1-49e8f180162c" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409969 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ed4f34d-fe0d-402c-95d3-171e73eb5bd5" containerName="keystone-db-sync" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409984 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a8e424f-44a5-4eaa-9f3f-882f070aa404" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409995 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd491cd9-f58f-4821-8004-a5a4762d6bdb" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.410610 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.413008 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.414291 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.414613 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.414919 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9szpl" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.415234 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.418141 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8qph9"] Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.422628 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.437009 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8qph9"] Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.455017 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wknkh"] Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525058 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-dns-svc\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525101 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9csq5\" (UniqueName: \"kubernetes.io/projected/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-kube-api-access-9csq5\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525160 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfbpd\" (UniqueName: \"kubernetes.io/projected/0e907b66-eaef-489a-b729-f61f0c7e347d-kube-api-access-jfbpd\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525177 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-config-data\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " 
pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525202 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525234 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-combined-ca-bundle\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525250 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-fernet-keys\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525268 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525296 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-credential-keys\") pod \"keystone-bootstrap-wknkh\" (UID: 
\"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525309 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-config\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525334 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-scripts\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.563468 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-p9l27"] Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.564609 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-p9l27" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.573234 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.573264 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pc4kw" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.574292 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.588250 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-p9l27"] Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.626562 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-credential-keys\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.626604 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-config\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.626634 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-scripts\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.626669 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-dns-svc\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.626689 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9csq5\" (UniqueName: \"kubernetes.io/projected/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-kube-api-access-9csq5\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.626742 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfbpd\" (UniqueName: \"kubernetes.io/projected/0e907b66-eaef-489a-b729-f61f0c7e347d-kube-api-access-jfbpd\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.626776 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-config-data\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.627168 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.627327 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-combined-ca-bundle\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.627422 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-fernet-keys\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.627509 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.628551 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.628632 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.629434 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-dns-svc\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: 
\"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.631778 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-scripts\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.631972 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-config-data\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.633568 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-combined-ca-bundle\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.633846 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-config\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.640906 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-credential-keys\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.642269 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-fernet-keys\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.645835 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.692801 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6487999dc5-x92k5"]
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.694626 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6487999dc5-x92k5"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.707600 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9csq5\" (UniqueName: \"kubernetes.io/projected/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-kube-api-access-9csq5\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.708581 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfbpd\" (UniqueName: \"kubernetes.io/projected/0e907b66-eaef-489a-b729-f61f0c7e347d-kube-api-access-jfbpd\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.728256 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-j2dt6"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.728674 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.728881 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.729077 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.732656 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d99k8\" (UniqueName: \"kubernetes.io/projected/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-kube-api-access-d99k8\") pod \"neutron-db-sync-p9l27\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.732883 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-combined-ca-bundle\") pod \"neutron-db-sync-p9l27\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.732962 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-config\") pod \"neutron-db-sync-p9l27\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.745503 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wknkh"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.751854 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-8qph9"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.827182 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6487999dc5-x92k5"]
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.837195 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d99k8\" (UniqueName: \"kubernetes.io/projected/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-kube-api-access-d99k8\") pod \"neutron-db-sync-p9l27\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.837313 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-combined-ca-bundle\") pod \"neutron-db-sync-p9l27\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.837340 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-config\") pod \"neutron-db-sync-p9l27\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.856151 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-config\") pod \"neutron-db-sync-p9l27\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.902395 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-combined-ca-bundle\") pod \"neutron-db-sync-p9l27\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.939552 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d99k8\" (UniqueName: \"kubernetes.io/projected/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-kube-api-access-d99k8\") pod \"neutron-db-sync-p9l27\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.949090 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f4df56-7f3e-490d-9321-dc520b65369a-logs\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.949170 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4f4df56-7f3e-490d-9321-dc520b65369a-horizon-secret-key\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.949204 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8qtj\" (UniqueName: \"kubernetes.io/projected/c4f4df56-7f3e-490d-9321-dc520b65369a-kube-api-access-r8qtj\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.949237 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-config-data\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.949256 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-scripts\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5"
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.972716 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-qvzh5"]
Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.973896 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.005452 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qvzh5"]
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.011015 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.011211 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.012126 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rhdz8"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.037159 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-9ntpw"]
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.038499 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9ntpw"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050715 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z75l6\" (UniqueName: \"kubernetes.io/projected/8db7f68b-a733-44fc-90b9-a1dd489fb42d-kube-api-access-z75l6\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050785 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f4df56-7f3e-490d-9321-dc520b65369a-logs\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050820 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4f4df56-7f3e-490d-9321-dc520b65369a-horizon-secret-key\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050845 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qtj\" (UniqueName: \"kubernetes.io/projected/c4f4df56-7f3e-490d-9321-dc520b65369a-kube-api-access-r8qtj\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050868 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-config-data\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050885 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-scripts\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050910 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-scripts\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050927 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-config-data\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050994 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-combined-ca-bundle\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.051013 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8db7f68b-a733-44fc-90b9-a1dd489fb42d-etc-machine-id\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.051039 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-db-sync-config-data\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.051463 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f4df56-7f3e-490d-9321-dc520b65369a-logs\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.053916 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-scripts\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.054007 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-config-data\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.061749 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.061935 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.062110 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-z4q86"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.066408 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4f4df56-7f3e-490d-9321-dc520b65369a-horizon-secret-key\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.073818 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8qtj\" (UniqueName: \"kubernetes.io/projected/c4f4df56-7f3e-490d-9321-dc520b65369a-kube-api-access-r8qtj\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.095511 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9ntpw"]
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.122463 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"040a6c9a84d4f13ad2cfb74bb2fba17bf1179819d6e678b47e5738572d85436f"}
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.128856 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8qph9"]
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.136920 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-h2kjs"]
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.137884 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6487999dc5-x92k5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.138335 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h2kjs"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.143213 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.143393 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qg5fj"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.154005 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z75l6\" (UniqueName: \"kubernetes.io/projected/8db7f68b-a733-44fc-90b9-a1dd489fb42d-kube-api-access-z75l6\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.154318 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-combined-ca-bundle\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.154457 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-scripts\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.154543 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-config-data\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.154627 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-config-data\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.154723 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-scripts\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.154843 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dmdr\" (UniqueName: \"kubernetes.io/projected/af8e769c-00c3-41a1-97c4-d91902767dfe-kube-api-access-9dmdr\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.154937 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af8e769c-00c3-41a1-97c4-d91902767dfe-logs\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.155056 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-combined-ca-bundle\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.155145 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8db7f68b-a733-44fc-90b9-a1dd489fb42d-etc-machine-id\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.155239 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-db-sync-config-data\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.159350 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8db7f68b-a733-44fc-90b9-a1dd489fb42d-etc-machine-id\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.164055 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h2kjs"]
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.177587 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-config-data\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.180331 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-db-sync-config-data\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.182368 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-scripts\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.185911 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z75l6\" (UniqueName: \"kubernetes.io/projected/8db7f68b-a733-44fc-90b9-a1dd489fb42d-kube-api-access-z75l6\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.187224 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-combined-ca-bundle\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.202048 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-p9l27"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.204264 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5dcf8ff489-qc7h7"]
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.205911 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dcf8ff489-qc7h7"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.235344 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dcf8ff489-qc7h7"]
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.243832 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"]
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.245784 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.258870 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dmdr\" (UniqueName: \"kubernetes.io/projected/af8e769c-00c3-41a1-97c4-d91902767dfe-kube-api-access-9dmdr\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.258913 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af8e769c-00c3-41a1-97c4-d91902767dfe-logs\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.258979 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-db-sync-config-data\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " pod="openstack/barbican-db-sync-h2kjs"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.259053 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdzfp\" (UniqueName: \"kubernetes.io/projected/8aeac097-ba93-4859-a14f-839ae1421e28-kube-api-access-kdzfp\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " pod="openstack/barbican-db-sync-h2kjs"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.259152 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-combined-ca-bundle\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " pod="openstack/barbican-db-sync-h2kjs"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.259261 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-combined-ca-bundle\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.259359 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-config-data\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.259439 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-scripts\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.260859 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af8e769c-00c3-41a1-97c4-d91902767dfe-logs\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.266067 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-combined-ca-bundle\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.270609 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-config-data\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.274021 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-scripts\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.282104 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dmdr\" (UniqueName: \"kubernetes.io/projected/af8e769c-00c3-41a1-97c4-d91902767dfe-kube-api-access-9dmdr\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.284945 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.287639 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.294591 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.296826 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.297033 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.359560 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"]
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.362913 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-run-httpd\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.362970 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-db-sync-config-data\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " pod="openstack/barbican-db-sync-h2kjs"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363032 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdzfp\" (UniqueName: \"kubernetes.io/projected/8aeac097-ba93-4859-a14f-839ae1421e28-kube-api-access-kdzfp\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " pod="openstack/barbican-db-sync-h2kjs"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363060 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-scripts\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363109 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363146 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68tzs\" (UniqueName: \"kubernetes.io/projected/e4517368-322e-4467-b31a-45b487e1035b-kube-api-access-68tzs\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363234 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f6285d-991e-4118-8f5b-d451c225f1d6-logs\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363294 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-combined-ca-bundle\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " pod="openstack/barbican-db-sync-h2kjs"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363353 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-config-data\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363393 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-config-data\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363499 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-config\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363559 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363591 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjzkc\" (UniqueName: \"kubernetes.io/projected/f152879a-9670-449a-be9f-d3314368e29c-kube-api-access-sjzkc\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363673 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-scripts\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363752 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363808 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79f6285d-991e-4118-8f5b-d451c225f1d6-horizon-secret-key\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363832 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-log-httpd\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363864 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363884 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363903 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwckx\" (UniqueName: \"kubernetes.io/projected/79f6285d-991e-4118-8f5b-d451c225f1d6-kube-api-access-mwckx\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.368329 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=45.142573986 podStartE2EDuration="1m8.368313153s" podCreationTimestamp="2026-02-18 19:33:42 +0000 UTC" firstStartedPulling="2026-02-18 19:34:24.42636145 +0000 UTC m=+1024.131294115" lastFinishedPulling="2026-02-18 19:34:47.652100627 +0000 UTC m=+1047.357033282" observedRunningTime="2026-02-18 19:34:50.207412491 +0000 UTC m=+1049.912345156" watchObservedRunningTime="2026-02-18 19:34:50.368313153 +0000 UTC m=+1050.073245818"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.374115 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.390397 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.401012 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-db-sync-config-data\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " pod="openstack/barbican-db-sync-h2kjs" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.418296 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-combined-ca-bundle\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " pod="openstack/barbican-db-sync-h2kjs" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.435867 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdzfp\" (UniqueName: \"kubernetes.io/projected/8aeac097-ba93-4859-a14f-839ae1421e28-kube-api-access-kdzfp\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " pod="openstack/barbican-db-sync-h2kjs" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470680 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470731 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68tzs\" (UniqueName: \"kubernetes.io/projected/e4517368-322e-4467-b31a-45b487e1035b-kube-api-access-68tzs\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 
19:34:50.470752 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f6285d-991e-4118-8f5b-d451c225f1d6-logs\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470805 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-config-data\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470825 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-config-data\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470859 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-config\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470873 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470891 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjzkc\" (UniqueName: 
\"kubernetes.io/projected/f152879a-9670-449a-be9f-d3314368e29c-kube-api-access-sjzkc\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470923 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-scripts\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470946 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470965 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79f6285d-991e-4118-8f5b-d451c225f1d6-horizon-secret-key\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470980 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-log-httpd\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470996 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " 
pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.471015 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.471032 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwckx\" (UniqueName: \"kubernetes.io/projected/79f6285d-991e-4118-8f5b-d451c225f1d6-kube-api-access-mwckx\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.471065 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-run-httpd\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.471095 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-scripts\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.471731 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-scripts\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.472323 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.472814 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f6285d-991e-4118-8f5b-d451c225f1d6-logs\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.474775 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.475704 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-config-data\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.476214 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-config\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.479081 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-nb\") pod 
\"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.479597 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-run-httpd\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.479846 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-log-httpd\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.480154 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h2kjs" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.497312 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-config-data\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.506279 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.507050 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") 
" pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.518182 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68tzs\" (UniqueName: \"kubernetes.io/projected/e4517368-322e-4467-b31a-45b487e1035b-kube-api-access-68tzs\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.519785 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-scripts\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.525038 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79f6285d-991e-4118-8f5b-d451c225f1d6-horizon-secret-key\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.531020 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjzkc\" (UniqueName: \"kubernetes.io/projected/f152879a-9670-449a-be9f-d3314368e29c-kube-api-access-sjzkc\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.534158 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwckx\" (UniqueName: \"kubernetes.io/projected/79f6285d-991e-4118-8f5b-d451c225f1d6-kube-api-access-mwckx\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.542465 4942 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.544382 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.580637 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.583284 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.587223 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.594258 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.677587 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.677668 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.677701 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-config\") 
pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.677789 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.677815 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs5jn\" (UniqueName: \"kubernetes.io/projected/f354be6c-0a53-41b2-923d-60de99a6ed65-kube-api-access-cs5jn\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.677913 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.757497 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.779608 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs5jn\" (UniqueName: \"kubernetes.io/projected/f354be6c-0a53-41b2-923d-60de99a6ed65-kube-api-access-cs5jn\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.779688 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.779736 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.780168 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.780192 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-config\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " 
pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.780247 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.781352 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.781804 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.781937 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.782612 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 
19:34:50.782693 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-config\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.803171 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs5jn\" (UniqueName: \"kubernetes.io/projected/f354be6c-0a53-41b2-923d-60de99a6ed65-kube-api-access-cs5jn\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.818384 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wknkh"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.832823 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.844353 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8qph9"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.926693 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.012570 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9ntpw"] Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.031249 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6487999dc5-x92k5"] Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.069112 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-p9l27"] Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.158822 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-p9l27" event={"ID":"a6c912f7-7ee8-4f53-a358-a6a6a5088be5","Type":"ContainerStarted","Data":"316d5107b8b347fd0cea3be7273208da7013d9d15ad9e9d0440db47bc1ed0d8e"} Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.161505 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-8qph9" event={"ID":"0e907b66-eaef-489a-b729-f61f0c7e347d","Type":"ContainerStarted","Data":"43ab5328f956e2ed08ffaa0187dd014311f455b02df0f837b7a002e208528e41"} Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.163733 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9ntpw" event={"ID":"af8e769c-00c3-41a1-97c4-d91902767dfe","Type":"ContainerStarted","Data":"eb7a8e3a23f3477cac51aacb10a95d5378f6772c63aae9e96752efd516b0a2a1"} Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.166646 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6487999dc5-x92k5" event={"ID":"c4f4df56-7f3e-490d-9321-dc520b65369a","Type":"ContainerStarted","Data":"ac381e3f114e8f2e0ca2ad49412144e5bd5345aa14e469a41eeec38b75b61e1c"} Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.173073 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wknkh" 
event={"ID":"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0","Type":"ContainerStarted","Data":"262290f48bc9f52e9ad2af485330819793bdd52215504ccea4c7c0b79cc77dac"} Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.195901 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h2kjs"] Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.277994 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:34:51 crc kubenswrapper[4942]: W0218 19:34:51.297879 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf152879a_9670_449a_be9f_d3314368e29c.slice/crio-e2f8c0a37589b6fa961dd22d4a9b95b3343135606c2c9865d94c65eebcefb5e6 WatchSource:0}: Error finding container e2f8c0a37589b6fa961dd22d4a9b95b3343135606c2c9865d94c65eebcefb5e6: Status 404 returned error can't find the container with id e2f8c0a37589b6fa961dd22d4a9b95b3343135606c2c9865d94c65eebcefb5e6 Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.307279 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qvzh5"] Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.316825 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"] Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.407382 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"] Feb 18 19:34:51 crc kubenswrapper[4942]: W0218 19:34:51.419633 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf354be6c_0a53_41b2_923d_60de99a6ed65.slice/crio-d0d48456629f18d0d25f803c1de4ee3c6cb53d9140b37084a9b2aa9d6750f014 WatchSource:0}: Error finding container d0d48456629f18d0d25f803c1de4ee3c6cb53d9140b37084a9b2aa9d6750f014: Status 404 returned error can't find the container with id 
d0d48456629f18d0d25f803c1de4ee3c6cb53d9140b37084a9b2aa9d6750f014 Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.528851 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dcf8ff489-qc7h7"] Feb 18 19:34:51 crc kubenswrapper[4942]: W0218 19:34:51.545858 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79f6285d_991e_4118_8f5b_d451c225f1d6.slice/crio-f7d111b50e472dcb7f51a51999f9e9be0fffcc2cd7c0ebb311c39dd7aa656b89 WatchSource:0}: Error finding container f7d111b50e472dcb7f51a51999f9e9be0fffcc2cd7c0ebb311c39dd7aa656b89: Status 404 returned error can't find the container with id f7d111b50e472dcb7f51a51999f9e9be0fffcc2cd7c0ebb311c39dd7aa656b89 Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.189802 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wknkh" event={"ID":"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0","Type":"ContainerStarted","Data":"fa114cb799909584016955a551d4df04e20f11df9588933ed8a958c11cc58031"} Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.194076 4942 generic.go:334] "Generic (PLEG): container finished" podID="f354be6c-0a53-41b2-923d-60de99a6ed65" containerID="64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d" exitCode=0 Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.194130 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" event={"ID":"f354be6c-0a53-41b2-923d-60de99a6ed65","Type":"ContainerDied","Data":"64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d"} Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.194148 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" event={"ID":"f354be6c-0a53-41b2-923d-60de99a6ed65","Type":"ContainerStarted","Data":"d0d48456629f18d0d25f803c1de4ee3c6cb53d9140b37084a9b2aa9d6750f014"} Feb 18 19:34:52 crc 
kubenswrapper[4942]: I0218 19:34:52.197833 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-p9l27" event={"ID":"a6c912f7-7ee8-4f53-a358-a6a6a5088be5","Type":"ContainerStarted","Data":"1f69a1fd29ab925cd8cf8e9aff116531b62f274c86f6998747eb096250393ed9"} Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.200555 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qvzh5" event={"ID":"8db7f68b-a733-44fc-90b9-a1dd489fb42d","Type":"ContainerStarted","Data":"e6bd17d6977af834a72bbf74bee36179b26553390413854446805a67a2e12afa"} Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.202024 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerStarted","Data":"6813065f5777b4af8dd89f8c25333785bb85a450b21a1a7ab93d214ca1b8049c"} Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.207487 4942 generic.go:334] "Generic (PLEG): container finished" podID="0e907b66-eaef-489a-b729-f61f0c7e347d" containerID="0d67f368ec724e01a3830704823ce44b7d34d87d57cad7e2696b5373ea79d251" exitCode=0 Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.207555 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-8qph9" event={"ID":"0e907b66-eaef-489a-b729-f61f0c7e347d","Type":"ContainerDied","Data":"0d67f368ec724e01a3830704823ce44b7d34d87d57cad7e2696b5373ea79d251"} Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.208127 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wknkh" podStartSLOduration=3.208117437 podStartE2EDuration="3.208117437s" podCreationTimestamp="2026-02-18 19:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:34:52.207913562 +0000 UTC m=+1051.912846247" watchObservedRunningTime="2026-02-18 19:34:52.208117437 
+0000 UTC m=+1051.913050102" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.213263 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h2kjs" event={"ID":"8aeac097-ba93-4859-a14f-839ae1421e28","Type":"ContainerStarted","Data":"e12d1b9fecda9ebe7bb6c836765d71cc803f359fe9c297ce1d8263fb74f3fe1c"} Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.222827 4942 generic.go:334] "Generic (PLEG): container finished" podID="f152879a-9670-449a-be9f-d3314368e29c" containerID="7873d578054ec79fc1afaa80065023d0a5361c0b9d7456f037b28f5f4424be84" exitCode=0 Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.222888 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" event={"ID":"f152879a-9670-449a-be9f-d3314368e29c","Type":"ContainerDied","Data":"7873d578054ec79fc1afaa80065023d0a5361c0b9d7456f037b28f5f4424be84"} Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.222912 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" event={"ID":"f152879a-9670-449a-be9f-d3314368e29c","Type":"ContainerStarted","Data":"e2f8c0a37589b6fa961dd22d4a9b95b3343135606c2c9865d94c65eebcefb5e6"} Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.231298 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dcf8ff489-qc7h7" event={"ID":"79f6285d-991e-4118-8f5b-d451c225f1d6","Type":"ContainerStarted","Data":"f7d111b50e472dcb7f51a51999f9e9be0fffcc2cd7c0ebb311c39dd7aa656b89"} Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.292243 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-p9l27" podStartSLOduration=3.292226653 podStartE2EDuration="3.292226653s" podCreationTimestamp="2026-02-18 19:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:34:52.28478302 +0000 UTC 
m=+1051.989715685" watchObservedRunningTime="2026-02-18 19:34:52.292226653 +0000 UTC m=+1051.997159318" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.497805 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dcf8ff489-qc7h7"] Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.546840 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7d6d8bb5d5-5l49m"] Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.548357 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.555641 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d6d8bb5d5-5l49m"] Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.643724 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29a05f17-8ada-451f-8460-887a45caa4e6-horizon-secret-key\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.643794 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-config-data\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.643860 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29a05f17-8ada-451f-8460-887a45caa4e6-logs\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 
19:34:52.644651 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbrsd\" (UniqueName: \"kubernetes.io/projected/29a05f17-8ada-451f-8460-887a45caa4e6-kube-api-access-hbrsd\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.644715 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-scripts\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.738096 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.748413 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29a05f17-8ada-451f-8460-887a45caa4e6-logs\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.748461 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbrsd\" (UniqueName: \"kubernetes.io/projected/29a05f17-8ada-451f-8460-887a45caa4e6-kube-api-access-hbrsd\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.748487 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-scripts\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " 
pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.748552 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29a05f17-8ada-451f-8460-887a45caa4e6-horizon-secret-key\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.748579 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-config-data\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.749617 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-config-data\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.749862 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29a05f17-8ada-451f-8460-887a45caa4e6-logs\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.750396 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-scripts\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.768709 4942 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29a05f17-8ada-451f-8460-887a45caa4e6-horizon-secret-key\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.774288 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbrsd\" (UniqueName: \"kubernetes.io/projected/29a05f17-8ada-451f-8460-887a45caa4e6-kube-api-access-hbrsd\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.860685 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.883656 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.929274 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.962572 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-dns-svc\") pod \"0e907b66-eaef-489a-b729-f61f0c7e347d\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.962698 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-config\") pod \"0e907b66-eaef-489a-b729-f61f0c7e347d\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.962743 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfbpd\" (UniqueName: \"kubernetes.io/projected/0e907b66-eaef-489a-b729-f61f0c7e347d-kube-api-access-jfbpd\") pod \"0e907b66-eaef-489a-b729-f61f0c7e347d\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.962816 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-nb\") pod \"0e907b66-eaef-489a-b729-f61f0c7e347d\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.962883 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-sb\") pod \"0e907b66-eaef-489a-b729-f61f0c7e347d\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.972938 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0e907b66-eaef-489a-b729-f61f0c7e347d-kube-api-access-jfbpd" (OuterVolumeSpecName: "kube-api-access-jfbpd") pod "0e907b66-eaef-489a-b729-f61f0c7e347d" (UID: "0e907b66-eaef-489a-b729-f61f0c7e347d"). InnerVolumeSpecName "kube-api-access-jfbpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.987051 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-config" (OuterVolumeSpecName: "config") pod "0e907b66-eaef-489a-b729-f61f0c7e347d" (UID: "0e907b66-eaef-489a-b729-f61f0c7e347d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.989232 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0e907b66-eaef-489a-b729-f61f0c7e347d" (UID: "0e907b66-eaef-489a-b729-f61f0c7e347d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.012705 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0e907b66-eaef-489a-b729-f61f0c7e347d" (UID: "0e907b66-eaef-489a-b729-f61f0c7e347d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.014468 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0e907b66-eaef-489a-b729-f61f0c7e347d" (UID: "0e907b66-eaef-489a-b729-f61f0c7e347d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065042 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-dns-svc\") pod \"f152879a-9670-449a-be9f-d3314368e29c\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065171 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-config\") pod \"f152879a-9670-449a-be9f-d3314368e29c\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065199 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-sb\") pod \"f152879a-9670-449a-be9f-d3314368e29c\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065267 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-nb\") pod \"f152879a-9670-449a-be9f-d3314368e29c\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065427 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjzkc\" (UniqueName: \"kubernetes.io/projected/f152879a-9670-449a-be9f-d3314368e29c-kube-api-access-sjzkc\") pod \"f152879a-9670-449a-be9f-d3314368e29c\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065884 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065901 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065910 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065918 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065926 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfbpd\" (UniqueName: \"kubernetes.io/projected/0e907b66-eaef-489a-b729-f61f0c7e347d-kube-api-access-jfbpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.079192 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f152879a-9670-449a-be9f-d3314368e29c-kube-api-access-sjzkc" (OuterVolumeSpecName: "kube-api-access-sjzkc") pod "f152879a-9670-449a-be9f-d3314368e29c" (UID: "f152879a-9670-449a-be9f-d3314368e29c"). InnerVolumeSpecName "kube-api-access-sjzkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.096324 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f152879a-9670-449a-be9f-d3314368e29c" (UID: "f152879a-9670-449a-be9f-d3314368e29c"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.096404 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-config" (OuterVolumeSpecName: "config") pod "f152879a-9670-449a-be9f-d3314368e29c" (UID: "f152879a-9670-449a-be9f-d3314368e29c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.103670 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f152879a-9670-449a-be9f-d3314368e29c" (UID: "f152879a-9670-449a-be9f-d3314368e29c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.109513 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f152879a-9670-449a-be9f-d3314368e29c" (UID: "f152879a-9670-449a-be9f-d3314368e29c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.173069 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.173369 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.173397 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.173409 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.173436 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjzkc\" (UniqueName: \"kubernetes.io/projected/f152879a-9670-449a-be9f-d3314368e29c-kube-api-access-sjzkc\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.247581 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.248337 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-8qph9" event={"ID":"0e907b66-eaef-489a-b729-f61f0c7e347d","Type":"ContainerDied","Data":"43ab5328f956e2ed08ffaa0187dd014311f455b02df0f837b7a002e208528e41"} Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.248369 4942 scope.go:117] "RemoveContainer" containerID="0d67f368ec724e01a3830704823ce44b7d34d87d57cad7e2696b5373ea79d251" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.251478 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" event={"ID":"f152879a-9670-449a-be9f-d3314368e29c","Type":"ContainerDied","Data":"e2f8c0a37589b6fa961dd22d4a9b95b3343135606c2c9865d94c65eebcefb5e6"} Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.251549 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.255152 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" event={"ID":"f354be6c-0a53-41b2-923d-60de99a6ed65","Type":"ContainerStarted","Data":"cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34"} Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.255565 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.311711 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" podStartSLOduration=3.311694024 podStartE2EDuration="3.311694024s" podCreationTimestamp="2026-02-18 19:34:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 
19:34:53.289144078 +0000 UTC m=+1052.994076763" watchObservedRunningTime="2026-02-18 19:34:53.311694024 +0000 UTC m=+1053.016626689" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.352270 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"] Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.364814 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"] Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.409132 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8qph9"] Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.420172 4942 scope.go:117] "RemoveContainer" containerID="7873d578054ec79fc1afaa80065023d0a5361c0b9d7456f037b28f5f4424be84" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.428063 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8qph9"] Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.445866 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d6d8bb5d5-5l49m"] Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.741351 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.741406 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.741449 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.742250 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ad75b87330a71997979db298f42e179882b61890e654d3a0c077cf25d5cb90b"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.742308 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://4ad75b87330a71997979db298f42e179882b61890e654d3a0c077cf25d5cb90b" gracePeriod=600 Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.276437 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="4ad75b87330a71997979db298f42e179882b61890e654d3a0c077cf25d5cb90b" exitCode=0 Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.276518 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"4ad75b87330a71997979db298f42e179882b61890e654d3a0c077cf25d5cb90b"} Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.276722 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"8ecda90ff377eb2cb3234b37ad9a8ec87fa575a7e7c5a3a78ee7c2e00f4a7b66"} Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.276739 4942 scope.go:117] "RemoveContainer" 
containerID="573640abad6b15c1dd30fd80a1b600755a1efda149dab25e49e3a1173acf646a" Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.280549 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d6d8bb5d5-5l49m" event={"ID":"29a05f17-8ada-451f-8460-887a45caa4e6","Type":"ContainerStarted","Data":"f9f400d74dcc827f603d02436cc05b6b30e0d9e44bb3117a942b80c1685b87ee"} Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.296600 4942 generic.go:334] "Generic (PLEG): container finished" podID="72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" containerID="8c6545f8eaa3b666b06d888c16ee9caa900adcec0bcd683e72e4f96180bd297d" exitCode=0 Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.296690 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zw8ls" event={"ID":"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3","Type":"ContainerDied","Data":"8c6545f8eaa3b666b06d888c16ee9caa900adcec0bcd683e72e4f96180bd297d"} Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.646582 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.661269 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:55 crc kubenswrapper[4942]: I0218 19:34:55.051341 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e907b66-eaef-489a-b729-f61f0c7e347d" path="/var/lib/kubelet/pods/0e907b66-eaef-489a-b729-f61f0c7e347d/volumes" Feb 18 19:34:55 crc kubenswrapper[4942]: I0218 19:34:55.052453 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f152879a-9670-449a-be9f-d3314368e29c" path="/var/lib/kubelet/pods/f152879a-9670-449a-be9f-d3314368e29c/volumes" Feb 18 19:34:55 crc kubenswrapper[4942]: I0218 19:34:55.332610 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 
18 19:34:56 crc kubenswrapper[4942]: I0218 19:34:56.346816 4942 generic.go:334] "Generic (PLEG): container finished" podID="4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" containerID="fa114cb799909584016955a551d4df04e20f11df9588933ed8a958c11cc58031" exitCode=0 Feb 18 19:34:56 crc kubenswrapper[4942]: I0218 19:34:56.346989 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wknkh" event={"ID":"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0","Type":"ContainerDied","Data":"fa114cb799909584016955a551d4df04e20f11df9588933ed8a958c11cc58031"} Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.163166 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6487999dc5-x92k5"] Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.197852 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-54d64cf59b-xp7rk"] Feb 18 19:34:59 crc kubenswrapper[4942]: E0218 19:34:59.198173 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e907b66-eaef-489a-b729-f61f0c7e347d" containerName="init" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.198187 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e907b66-eaef-489a-b729-f61f0c7e347d" containerName="init" Feb 18 19:34:59 crc kubenswrapper[4942]: E0218 19:34:59.198222 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f152879a-9670-449a-be9f-d3314368e29c" containerName="init" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.198232 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f152879a-9670-449a-be9f-d3314368e29c" containerName="init" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.198402 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e907b66-eaef-489a-b729-f61f0c7e347d" containerName="init" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.198426 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f152879a-9670-449a-be9f-d3314368e29c" 
containerName="init" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.199265 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.202818 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.211105 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54d64cf59b-xp7rk"] Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.224056 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-logs\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.224160 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-scripts\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.224221 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnrb5\" (UniqueName: \"kubernetes.io/projected/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-kube-api-access-dnrb5\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.224259 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-combined-ca-bundle\") pod 
\"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.224337 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-config-data\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.224393 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-tls-certs\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.224423 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-secret-key\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.247349 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d6d8bb5d5-5l49m"] Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.281098 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b6b6597b8-m8ngr"] Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.282700 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.289487 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b6b6597b8-m8ngr"] Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326089 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-horizon-secret-key\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326157 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-combined-ca-bundle\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326256 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-combined-ca-bundle\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326292 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55d24776-2d1c-413a-8ba1-06cdadf63d04-logs\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326391 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-config-data\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326425 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-tls-certs\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326441 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-horizon-tls-certs\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326476 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55d24776-2d1c-413a-8ba1-06cdadf63d04-scripts\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326507 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-secret-key\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326524 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8hzt\" (UniqueName: 
\"kubernetes.io/projected/55d24776-2d1c-413a-8ba1-06cdadf63d04-kube-api-access-j8hzt\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326694 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-logs\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326780 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55d24776-2d1c-413a-8ba1-06cdadf63d04-config-data\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326815 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-scripts\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326851 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnrb5\" (UniqueName: \"kubernetes.io/projected/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-kube-api-access-dnrb5\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.328937 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-logs\") pod \"horizon-54d64cf59b-xp7rk\" (UID: 
\"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.329544 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-config-data\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.330291 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-scripts\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.332584 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-tls-certs\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.333087 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-secret-key\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.340789 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-combined-ca-bundle\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 
19:34:59.345923 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnrb5\" (UniqueName: \"kubernetes.io/projected/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-kube-api-access-dnrb5\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.428412 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55d24776-2d1c-413a-8ba1-06cdadf63d04-config-data\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.428501 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-horizon-secret-key\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.428540 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-combined-ca-bundle\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.428568 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55d24776-2d1c-413a-8ba1-06cdadf63d04-logs\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.428632 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-horizon-tls-certs\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.428665 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55d24776-2d1c-413a-8ba1-06cdadf63d04-scripts\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.428696 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8hzt\" (UniqueName: \"kubernetes.io/projected/55d24776-2d1c-413a-8ba1-06cdadf63d04-kube-api-access-j8hzt\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.429471 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55d24776-2d1c-413a-8ba1-06cdadf63d04-logs\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.430180 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55d24776-2d1c-413a-8ba1-06cdadf63d04-scripts\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.431972 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55d24776-2d1c-413a-8ba1-06cdadf63d04-config-data\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: 
\"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.433024 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-horizon-secret-key\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.434392 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-combined-ca-bundle\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.434420 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-horizon-tls-certs\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.456248 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8hzt\" (UniqueName: \"kubernetes.io/projected/55d24776-2d1c-413a-8ba1-06cdadf63d04-kube-api-access-j8hzt\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.526419 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.614413 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:35:00 crc kubenswrapper[4942]: I0218 19:35:00.927952 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:35:00 crc kubenswrapper[4942]: I0218 19:35:00.995189 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nnzck"] Feb 18 19:35:00 crc kubenswrapper[4942]: I0218 19:35:00.995507 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-nnzck" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="dnsmasq-dns" containerID="cri-o://c929bc7a17036437784be59c9727e4ee675c038074de07e36b3deb35090e3ae7" gracePeriod=10 Feb 18 19:35:01 crc kubenswrapper[4942]: I0218 19:35:01.396694 4942 generic.go:334] "Generic (PLEG): container finished" podID="1e919317-cae2-432d-959f-8cf1d4520b56" containerID="c929bc7a17036437784be59c9727e4ee675c038074de07e36b3deb35090e3ae7" exitCode=0 Feb 18 19:35:01 crc kubenswrapper[4942]: I0218 19:35:01.396828 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nnzck" event={"ID":"1e919317-cae2-432d-959f-8cf1d4520b56","Type":"ContainerDied","Data":"c929bc7a17036437784be59c9727e4ee675c038074de07e36b3deb35090e3ae7"} Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.182175 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zw8ls" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.310662 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-db-sync-config-data\") pod \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.310756 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-combined-ca-bundle\") pod \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.310842 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb9h8\" (UniqueName: \"kubernetes.io/projected/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-kube-api-access-gb9h8\") pod \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.310886 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-config-data\") pod \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.320507 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" (UID: "72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.320886 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-kube-api-access-gb9h8" (OuterVolumeSpecName: "kube-api-access-gb9h8") pod "72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" (UID: "72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3"). InnerVolumeSpecName "kube-api-access-gb9h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.343722 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" (UID: "72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.371651 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-config-data" (OuterVolumeSpecName: "config-data") pod "72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" (UID: "72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.408339 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zw8ls" event={"ID":"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3","Type":"ContainerDied","Data":"e983b61464f792023c5c202bd16dd9437e3b945f9e2f82c09b596638a70e9520"} Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.408375 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e983b61464f792023c5c202bd16dd9437e3b945f9e2f82c09b596638a70e9520" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.408427 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-zw8ls" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.414280 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb9h8\" (UniqueName: \"kubernetes.io/projected/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-kube-api-access-gb9h8\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.414316 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.414330 4942 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.414341 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.856171 4942 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-698758b865-nnzck" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: connect: connection refused" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.635770 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jhblh"] Feb 18 19:35:03 crc kubenswrapper[4942]: E0218 19:35:03.636367 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" containerName="glance-db-sync" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.636383 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" containerName="glance-db-sync" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.636563 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" containerName="glance-db-sync" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.644440 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.650464 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jhblh"] Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.736944 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.736995 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.737035 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-config\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.737104 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8v6l\" (UniqueName: \"kubernetes.io/projected/47732c7e-8c0f-4244-bddb-98bf7b21d2db-kube-api-access-d8v6l\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.737130 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.737171 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.838616 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.838690 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.838747 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-config\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.838846 4942 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-d8v6l\" (UniqueName: \"kubernetes.io/projected/47732c7e-8c0f-4244-bddb-98bf7b21d2db-kube-api-access-d8v6l\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.838884 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.838947 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.840098 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.840201 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.840440 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-config\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.840839 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.841180 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.873542 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8v6l\" (UniqueName: \"kubernetes.io/projected/47732c7e-8c0f-4244-bddb-98bf7b21d2db-kube-api-access-d8v6l\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.966379 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.537644 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.539036 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.540970 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-j6c2t" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.541185 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.541305 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.552357 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.652280 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.652345 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-scripts\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.652424 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: 
I0218 19:35:04.652491 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.652513 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.652577 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-logs\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.652648 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n92mx\" (UniqueName: \"kubernetes.io/projected/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-kube-api-access-n92mx\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.754433 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.755016 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.755084 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-logs\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.755213 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n92mx\" (UniqueName: \"kubernetes.io/projected/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-kube-api-access-n92mx\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.755269 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.755320 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-scripts\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.755377 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.756236 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-logs\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.754880 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.756855 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.762005 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.764777 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-scripts\") 
pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.772052 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.773191 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.776344 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.783141 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.785593 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n92mx\" (UniqueName: \"kubernetes.io/projected/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-kube-api-access-n92mx\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.794682 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.815234 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:04 crc 
kubenswrapper[4942]: I0218 19:35:04.856610 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.856742 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.856807 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.856832 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.856851 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc 
kubenswrapper[4942]: I0218 19:35:04.856881 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6mcd\" (UniqueName: \"kubernetes.io/projected/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-kube-api-access-r6mcd\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.856909 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.863840 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.958132 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.958194 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.958224 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") 
pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.958264 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6mcd\" (UniqueName: \"kubernetes.io/projected/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-kube-api-access-r6mcd\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.958306 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.958347 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.958421 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.958542 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") device mount 
path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.959634 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.961718 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.961901 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.964591 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.975538 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 
19:35:04.976540 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6mcd\" (UniqueName: \"kubernetes.io/projected/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-kube-api-access-r6mcd\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.986546 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:05 crc kubenswrapper[4942]: I0218 19:35:05.174635 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:06 crc kubenswrapper[4942]: I0218 19:35:06.258810 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:35:06 crc kubenswrapper[4942]: I0218 19:35:06.338582 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:07 crc kubenswrapper[4942]: I0218 19:35:07.856098 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nnzck" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: connect: connection refused" Feb 18 19:35:09 crc kubenswrapper[4942]: E0218 19:35:09.838209 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 18 19:35:09 crc kubenswrapper[4942]: E0218 19:35:09.838886 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b9h64dh5bdh596h588hb7h6h5d6h654h5c7h558hdch5ffh686h68ch5ddh686h5f9hcbh544h587h55fhd7h5c9h676hfhc4h5cdh674h599hd4h547q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8qtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6487999dc5-x92k5_openstack(c4f4df56-7f3e-490d-9321-dc520b65369a): ErrImagePull: 
rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:35:09 crc kubenswrapper[4942]: E0218 19:35:09.879327 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6487999dc5-x92k5" podUID="c4f4df56-7f3e-490d-9321-dc520b65369a" Feb 18 19:35:11 crc kubenswrapper[4942]: E0218 19:35:11.645753 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 18 19:35:11 crc kubenswrapper[4942]: E0218 19:35:11.646298 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bfh679h78h649h5ffh5ch5b6h56dhf7h54h86h57hbch68fhc8h557h8fh5c7h5b5h6fhb4h5fh5ddh56fh595h5dfh55fh566h5f6h64h84h86q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwckx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5dcf8ff489-qc7h7_openstack(79f6285d-991e-4118-8f5b-d451c225f1d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:35:11 crc kubenswrapper[4942]: E0218 
19:35:11.648310 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5dcf8ff489-qc7h7" podUID="79f6285d-991e-4118-8f5b-d451c225f1d6" Feb 18 19:35:11 crc kubenswrapper[4942]: E0218 19:35:11.652961 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 18 19:35:11 crc kubenswrapper[4942]: E0218 19:35:11.653088 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n598h5c7h4h64bh55ch55ch688h668h56h676h5fdhd5h5b9h589h697h8dh57dhdbh568hf5h655h566h579h99h55bh66dh544h594h66h7ch646h568q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbrsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7d6d8bb5d5-5l49m_openstack(29a05f17-8ada-451f-8460-887a45caa4e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:35:11 crc kubenswrapper[4942]: E0218 
19:35:11.655298 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7d6d8bb5d5-5l49m" podUID="29a05f17-8ada-451f-8460-887a45caa4e6" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.776577 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.879161 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-fernet-keys\") pod \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.879222 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-config-data\") pod \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.879247 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9csq5\" (UniqueName: \"kubernetes.io/projected/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-kube-api-access-9csq5\") pod \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.879333 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-scripts\") pod 
\"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.879432 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-credential-keys\") pod \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.879477 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-combined-ca-bundle\") pod \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.885710 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-scripts" (OuterVolumeSpecName: "scripts") pod "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" (UID: "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.886470 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" (UID: "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.886997 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-kube-api-access-9csq5" (OuterVolumeSpecName: "kube-api-access-9csq5") pod "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" (UID: "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0"). InnerVolumeSpecName "kube-api-access-9csq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.894441 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" (UID: "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.907352 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" (UID: "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.913640 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-config-data" (OuterVolumeSpecName: "config-data") pod "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" (UID: "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.981458 4942 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.981490 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.981500 4942 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.981508 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.981517 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9csq5\" (UniqueName: \"kubernetes.io/projected/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-kube-api-access-9csq5\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.981527 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.494438 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.494457 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wknkh" event={"ID":"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0","Type":"ContainerDied","Data":"262290f48bc9f52e9ad2af485330819793bdd52215504ccea4c7c0b79cc77dac"} Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.494509 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="262290f48bc9f52e9ad2af485330819793bdd52215504ccea4c7c0b79cc77dac" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.496351 4942 generic.go:334] "Generic (PLEG): container finished" podID="a6c912f7-7ee8-4f53-a358-a6a6a5088be5" containerID="1f69a1fd29ab925cd8cf8e9aff116531b62f274c86f6998747eb096250393ed9" exitCode=0 Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.496389 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-p9l27" event={"ID":"a6c912f7-7ee8-4f53-a358-a6a6a5088be5","Type":"ContainerDied","Data":"1f69a1fd29ab925cd8cf8e9aff116531b62f274c86f6998747eb096250393ed9"} Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.896483 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wknkh"] Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.904788 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wknkh"] Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.978953 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tnqg7"] Feb 18 19:35:12 crc kubenswrapper[4942]: E0218 19:35:12.979301 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" containerName="keystone-bootstrap" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.979317 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" 
containerName="keystone-bootstrap" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.979535 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" containerName="keystone-bootstrap" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.980364 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.982619 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.982739 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.982776 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9szpl" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.982970 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.983227 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.998005 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tnqg7"] Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.048649 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" path="/var/lib/kubelet/pods/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0/volumes" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.101319 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-scripts\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " 
pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.101371 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-fernet-keys\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.101400 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-credential-keys\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.101439 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-config-data\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.101988 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28x9f\" (UniqueName: \"kubernetes.io/projected/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-kube-api-access-28x9f\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.102237 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-combined-ca-bundle\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " 
pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.203237 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28x9f\" (UniqueName: \"kubernetes.io/projected/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-kube-api-access-28x9f\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.203530 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-combined-ca-bundle\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.203608 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-scripts\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.203630 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-fernet-keys\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.203658 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-credential-keys\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 
19:35:13.203686 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-config-data\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.209336 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-config-data\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.216668 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-fernet-keys\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.219810 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-scripts\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.220036 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-combined-ca-bundle\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.220802 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28x9f\" (UniqueName: 
\"kubernetes.io/projected/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-kube-api-access-28x9f\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.222404 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-credential-keys\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.297118 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:17 crc kubenswrapper[4942]: I0218 19:35:17.857389 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nnzck" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: i/o timeout" Feb 18 19:35:17 crc kubenswrapper[4942]: I0218 19:35:17.857927 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.326611 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.528532 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4f4df56-7f3e-490d-9321-dc520b65369a-horizon-secret-key\") pod \"c4f4df56-7f3e-490d-9321-dc520b65369a\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.528713 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f4df56-7f3e-490d-9321-dc520b65369a-logs\") pod \"c4f4df56-7f3e-490d-9321-dc520b65369a\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.528896 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-scripts\") pod \"c4f4df56-7f3e-490d-9321-dc520b65369a\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.529328 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4f4df56-7f3e-490d-9321-dc520b65369a-logs" (OuterVolumeSpecName: "logs") pod "c4f4df56-7f3e-490d-9321-dc520b65369a" (UID: "c4f4df56-7f3e-490d-9321-dc520b65369a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.529689 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-scripts" (OuterVolumeSpecName: "scripts") pod "c4f4df56-7f3e-490d-9321-dc520b65369a" (UID: "c4f4df56-7f3e-490d-9321-dc520b65369a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.529803 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-config-data\") pod \"c4f4df56-7f3e-490d-9321-dc520b65369a\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.529838 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8qtj\" (UniqueName: \"kubernetes.io/projected/c4f4df56-7f3e-490d-9321-dc520b65369a-kube-api-access-r8qtj\") pod \"c4f4df56-7f3e-490d-9321-dc520b65369a\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.530338 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-config-data" (OuterVolumeSpecName: "config-data") pod "c4f4df56-7f3e-490d-9321-dc520b65369a" (UID: "c4f4df56-7f3e-490d-9321-dc520b65369a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.530848 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f4df56-7f3e-490d-9321-dc520b65369a-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.530879 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.530896 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.536463 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f4df56-7f3e-490d-9321-dc520b65369a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c4f4df56-7f3e-490d-9321-dc520b65369a" (UID: "c4f4df56-7f3e-490d-9321-dc520b65369a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.536962 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f4df56-7f3e-490d-9321-dc520b65369a-kube-api-access-r8qtj" (OuterVolumeSpecName: "kube-api-access-r8qtj") pod "c4f4df56-7f3e-490d-9321-dc520b65369a" (UID: "c4f4df56-7f3e-490d-9321-dc520b65369a"). InnerVolumeSpecName "kube-api-access-r8qtj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.553459 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6487999dc5-x92k5" event={"ID":"c4f4df56-7f3e-490d-9321-dc520b65369a","Type":"ContainerDied","Data":"ac381e3f114e8f2e0ca2ad49412144e5bd5345aa14e469a41eeec38b75b61e1c"} Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.553556 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.631635 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8qtj\" (UniqueName: \"kubernetes.io/projected/c4f4df56-7f3e-490d-9321-dc520b65369a-kube-api-access-r8qtj\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.631662 4942 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4f4df56-7f3e-490d-9321-dc520b65369a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.691187 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6487999dc5-x92k5"] Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.699352 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6487999dc5-x92k5"] Feb 18 19:35:19 crc kubenswrapper[4942]: E0218 19:35:19.798496 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 18 19:35:19 crc kubenswrapper[4942]: E0218 19:35:19.798668 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdzfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-h2kjs_openstack(8aeac097-ba93-4859-a14f-839ae1421e28): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:35:19 crc kubenswrapper[4942]: E0218 19:35:19.799864 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-h2kjs" 
podUID="8aeac097-ba93-4859-a14f-839ae1421e28" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.848551 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.856879 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.900018 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.907251 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-p9l27" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937393 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-combined-ca-bundle\") pod \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937471 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d99k8\" (UniqueName: \"kubernetes.io/projected/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-kube-api-access-d99k8\") pod \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937550 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-config-data\") pod \"29a05f17-8ada-451f-8460-887a45caa4e6\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937641 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f6285d-991e-4118-8f5b-d451c225f1d6-logs\") pod \"79f6285d-991e-4118-8f5b-d451c225f1d6\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937711 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-config-data\") pod \"79f6285d-991e-4118-8f5b-d451c225f1d6\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937736 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29a05f17-8ada-451f-8460-887a45caa4e6-logs\") pod \"29a05f17-8ada-451f-8460-887a45caa4e6\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937814 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-config\") pod \"1e919317-cae2-432d-959f-8cf1d4520b56\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937860 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-sb\") pod \"1e919317-cae2-432d-959f-8cf1d4520b56\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937988 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29a05f17-8ada-451f-8460-887a45caa4e6-horizon-secret-key\") pod \"29a05f17-8ada-451f-8460-887a45caa4e6\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 
19:35:19.938051 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-dns-svc\") pod \"1e919317-cae2-432d-959f-8cf1d4520b56\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.938076 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwckx\" (UniqueName: \"kubernetes.io/projected/79f6285d-991e-4118-8f5b-d451c225f1d6-kube-api-access-mwckx\") pod \"79f6285d-991e-4118-8f5b-d451c225f1d6\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.938144 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-nb\") pod \"1e919317-cae2-432d-959f-8cf1d4520b56\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.938217 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-scripts\") pod \"29a05f17-8ada-451f-8460-887a45caa4e6\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.938293 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-config\") pod \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.938327 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-scripts\") pod \"79f6285d-991e-4118-8f5b-d451c225f1d6\" (UID: 
\"79f6285d-991e-4118-8f5b-d451c225f1d6\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.939054 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-scripts" (OuterVolumeSpecName: "scripts") pod "29a05f17-8ada-451f-8460-887a45caa4e6" (UID: "29a05f17-8ada-451f-8460-887a45caa4e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.939638 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a05f17-8ada-451f-8460-887a45caa4e6-logs" (OuterVolumeSpecName: "logs") pod "29a05f17-8ada-451f-8460-887a45caa4e6" (UID: "29a05f17-8ada-451f-8460-887a45caa4e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.939800 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbrsd\" (UniqueName: \"kubernetes.io/projected/29a05f17-8ada-451f-8460-887a45caa4e6-kube-api-access-hbrsd\") pod \"29a05f17-8ada-451f-8460-887a45caa4e6\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.939839 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8vt6\" (UniqueName: \"kubernetes.io/projected/1e919317-cae2-432d-959f-8cf1d4520b56-kube-api-access-q8vt6\") pod \"1e919317-cae2-432d-959f-8cf1d4520b56\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.939870 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79f6285d-991e-4118-8f5b-d451c225f1d6-horizon-secret-key\") pod \"79f6285d-991e-4118-8f5b-d451c225f1d6\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " Feb 18 19:35:19 crc 
kubenswrapper[4942]: I0218 19:35:19.939838 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-config-data" (OuterVolumeSpecName: "config-data") pod "79f6285d-991e-4118-8f5b-d451c225f1d6" (UID: "79f6285d-991e-4118-8f5b-d451c225f1d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.940289 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79f6285d-991e-4118-8f5b-d451c225f1d6-logs" (OuterVolumeSpecName: "logs") pod "79f6285d-991e-4118-8f5b-d451c225f1d6" (UID: "79f6285d-991e-4118-8f5b-d451c225f1d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.940543 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-config-data" (OuterVolumeSpecName: "config-data") pod "29a05f17-8ada-451f-8460-887a45caa4e6" (UID: "29a05f17-8ada-451f-8460-887a45caa4e6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.940790 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f6285d-991e-4118-8f5b-d451c225f1d6-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.940812 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.940829 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29a05f17-8ada-451f-8460-887a45caa4e6-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.940841 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.940853 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.945981 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79f6285d-991e-4118-8f5b-d451c225f1d6-kube-api-access-mwckx" (OuterVolumeSpecName: "kube-api-access-mwckx") pod "79f6285d-991e-4118-8f5b-d451c225f1d6" (UID: "79f6285d-991e-4118-8f5b-d451c225f1d6"). InnerVolumeSpecName "kube-api-access-mwckx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.946374 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-scripts" (OuterVolumeSpecName: "scripts") pod "79f6285d-991e-4118-8f5b-d451c225f1d6" (UID: "79f6285d-991e-4118-8f5b-d451c225f1d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.964353 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a05f17-8ada-451f-8460-887a45caa4e6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "29a05f17-8ada-451f-8460-887a45caa4e6" (UID: "29a05f17-8ada-451f-8460-887a45caa4e6"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.964400 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a05f17-8ada-451f-8460-887a45caa4e6-kube-api-access-hbrsd" (OuterVolumeSpecName: "kube-api-access-hbrsd") pod "29a05f17-8ada-451f-8460-887a45caa4e6" (UID: "29a05f17-8ada-451f-8460-887a45caa4e6"). InnerVolumeSpecName "kube-api-access-hbrsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.964935 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79f6285d-991e-4118-8f5b-d451c225f1d6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "79f6285d-991e-4118-8f5b-d451c225f1d6" (UID: "79f6285d-991e-4118-8f5b-d451c225f1d6"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.978262 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-kube-api-access-d99k8" (OuterVolumeSpecName: "kube-api-access-d99k8") pod "a6c912f7-7ee8-4f53-a358-a6a6a5088be5" (UID: "a6c912f7-7ee8-4f53-a358-a6a6a5088be5"). InnerVolumeSpecName "kube-api-access-d99k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.984237 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e919317-cae2-432d-959f-8cf1d4520b56-kube-api-access-q8vt6" (OuterVolumeSpecName: "kube-api-access-q8vt6") pod "1e919317-cae2-432d-959f-8cf1d4520b56" (UID: "1e919317-cae2-432d-959f-8cf1d4520b56"). InnerVolumeSpecName "kube-api-access-q8vt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.994217 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6c912f7-7ee8-4f53-a358-a6a6a5088be5" (UID: "a6c912f7-7ee8-4f53-a358-a6a6a5088be5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.000292 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-config" (OuterVolumeSpecName: "config") pod "a6c912f7-7ee8-4f53-a358-a6a6a5088be5" (UID: "a6c912f7-7ee8-4f53-a358-a6a6a5088be5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.012969 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-config" (OuterVolumeSpecName: "config") pod "1e919317-cae2-432d-959f-8cf1d4520b56" (UID: "1e919317-cae2-432d-959f-8cf1d4520b56"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.016238 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1e919317-cae2-432d-959f-8cf1d4520b56" (UID: "1e919317-cae2-432d-959f-8cf1d4520b56"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.017524 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1e919317-cae2-432d-959f-8cf1d4520b56" (UID: "1e919317-cae2-432d-959f-8cf1d4520b56"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.026747 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1e919317-cae2-432d-959f-8cf1d4520b56" (UID: "1e919317-cae2-432d-959f-8cf1d4520b56"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042190 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042227 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042239 4942 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29a05f17-8ada-451f-8460-887a45caa4e6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042249 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042259 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwckx\" (UniqueName: \"kubernetes.io/projected/79f6285d-991e-4118-8f5b-d451c225f1d6-kube-api-access-mwckx\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042270 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042282 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042292 4942 reconciler_common.go:293] 
"Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042301 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbrsd\" (UniqueName: \"kubernetes.io/projected/29a05f17-8ada-451f-8460-887a45caa4e6-kube-api-access-hbrsd\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042310 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8vt6\" (UniqueName: \"kubernetes.io/projected/1e919317-cae2-432d-959f-8cf1d4520b56-kube-api-access-q8vt6\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042320 4942 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79f6285d-991e-4118-8f5b-d451c225f1d6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042329 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042340 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d99k8\" (UniqueName: \"kubernetes.io/projected/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-kube-api-access-d99k8\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.563864 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nnzck" event={"ID":"1e919317-cae2-432d-959f-8cf1d4520b56","Type":"ContainerDied","Data":"78b20f729f326e0f7c3c648fac44018c3d34b24ab3d2f709a7f976353f04998c"} Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.563919 4942 scope.go:117] "RemoveContainer" 
containerID="c929bc7a17036437784be59c9727e4ee675c038074de07e36b3deb35090e3ae7" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.563931 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.565525 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-p9l27" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.566224 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-p9l27" event={"ID":"a6c912f7-7ee8-4f53-a358-a6a6a5088be5","Type":"ContainerDied","Data":"316d5107b8b347fd0cea3be7273208da7013d9d15ad9e9d0440db47bc1ed0d8e"} Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.566247 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="316d5107b8b347fd0cea3be7273208da7013d9d15ad9e9d0440db47bc1ed0d8e" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.572378 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dcf8ff489-qc7h7" event={"ID":"79f6285d-991e-4118-8f5b-d451c225f1d6","Type":"ContainerDied","Data":"f7d111b50e472dcb7f51a51999f9e9be0fffcc2cd7c0ebb311c39dd7aa656b89"} Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.572429 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.575037 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.575036 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d6d8bb5d5-5l49m" event={"ID":"29a05f17-8ada-451f-8460-887a45caa4e6","Type":"ContainerDied","Data":"f9f400d74dcc827f603d02436cc05b6b30e0d9e44bb3117a942b80c1685b87ee"} Feb 18 19:35:20 crc kubenswrapper[4942]: E0218 19:35:20.575910 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-h2kjs" podUID="8aeac097-ba93-4859-a14f-839ae1421e28" Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.624432 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nnzck"] Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.634019 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nnzck"] Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.668122 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dcf8ff489-qc7h7"] Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.686228 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5dcf8ff489-qc7h7"] Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.704886 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d6d8bb5d5-5l49m"] Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.713327 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7d6d8bb5d5-5l49m"] Feb 18 19:35:21 crc kubenswrapper[4942]: E0218 19:35:21.028632 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 18 19:35:21 crc kubenswrapper[4942]: E0218 19:35:21.028967 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z75l6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Live
nessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qvzh5_openstack(8db7f68b-a733-44fc-90b9-a1dd489fb42d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:35:21 crc kubenswrapper[4942]: E0218 19:35:21.031206 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-qvzh5" podUID="8db7f68b-a733-44fc-90b9-a1dd489fb42d" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.052385 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" path="/var/lib/kubelet/pods/1e919317-cae2-432d-959f-8cf1d4520b56/volumes" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.053679 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a05f17-8ada-451f-8460-887a45caa4e6" path="/var/lib/kubelet/pods/29a05f17-8ada-451f-8460-887a45caa4e6/volumes" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.057830 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79f6285d-991e-4118-8f5b-d451c225f1d6" path="/var/lib/kubelet/pods/79f6285d-991e-4118-8f5b-d451c225f1d6/volumes" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.058705 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="c4f4df56-7f3e-490d-9321-dc520b65369a" path="/var/lib/kubelet/pods/c4f4df56-7f3e-490d-9321-dc520b65369a/volumes" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.119275 4942 scope.go:117] "RemoveContainer" containerID="2d800ad31d40bf814e416ec398183ae11509cddedf514a96b60bf309617fbbde" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.216471 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jhblh"] Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.293877 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-b4sf9"] Feb 18 19:35:21 crc kubenswrapper[4942]: E0218 19:35:21.330152 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="dnsmasq-dns" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.330245 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="dnsmasq-dns" Feb 18 19:35:21 crc kubenswrapper[4942]: E0218 19:35:21.330327 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c912f7-7ee8-4f53-a358-a6a6a5088be5" containerName="neutron-db-sync" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.330337 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c912f7-7ee8-4f53-a358-a6a6a5088be5" containerName="neutron-db-sync" Feb 18 19:35:21 crc kubenswrapper[4942]: E0218 19:35:21.330361 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="init" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.330369 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="init" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.340156 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c912f7-7ee8-4f53-a358-a6a6a5088be5" 
containerName="neutron-db-sync" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.340199 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="dnsmasq-dns" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.345219 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-b4sf9"] Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.345340 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.366077 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-67cc44d6c6-sp59w"] Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.369351 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.371840 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.383836 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pc4kw" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.384094 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.390520 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.394642 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67cc44d6c6-sp59w"] Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.488460 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-config\") 
pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.489405 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-svc\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.489516 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-combined-ca-bundle\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.489613 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-httpd-config\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.489844 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.489960 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.490025 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-config\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.490096 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-ovndb-tls-certs\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.490184 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gq76\" (UniqueName: \"kubernetes.io/projected/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-kube-api-access-7gq76\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.490264 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.490384 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-znrs9\" (UniqueName: \"kubernetes.io/projected/df34bdbb-8771-4d46-b5ba-29088c793a4c-kube-api-access-znrs9\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: E0218 19:35:21.590111 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-qvzh5" podUID="8db7f68b-a733-44fc-90b9-a1dd489fb42d" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591452 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591505 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591524 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-config\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591543 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-ovndb-tls-certs\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591585 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gq76\" (UniqueName: \"kubernetes.io/projected/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-kube-api-access-7gq76\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591601 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591644 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znrs9\" (UniqueName: \"kubernetes.io/projected/df34bdbb-8771-4d46-b5ba-29088c793a4c-kube-api-access-znrs9\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591679 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-config\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591708 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-svc\") pod 
\"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591729 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-combined-ca-bundle\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591749 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-httpd-config\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.592833 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.593600 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.593884 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " 
pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.594218 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-svc\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.597345 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-config\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.605482 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-ovndb-tls-certs\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.605632 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-config\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.608567 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-httpd-config\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.612496 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-7gq76\" (UniqueName: \"kubernetes.io/projected/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-kube-api-access-7gq76\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.612548 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znrs9\" (UniqueName: \"kubernetes.io/projected/df34bdbb-8771-4d46-b5ba-29088c793a4c-kube-api-access-znrs9\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.617549 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-combined-ca-bundle\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.766540 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.775122 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.775477 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54d64cf59b-xp7rk"] Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.822405 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jhblh"] Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.943022 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b6b6597b8-m8ngr"] Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.001969 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tnqg7"] Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.041355 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.092724 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.348451 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-b4sf9"] Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.565005 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67cc44d6c6-sp59w"] Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.605640 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tnqg7" event={"ID":"f29ae8a1-b3cc-452c-ac99-b450ef3125d8","Type":"ContainerStarted","Data":"16fd17087ed9bd06ba590a2897d1853b93c4e9cb882e3c311955fd4cf453c84b"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.605702 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tnqg7" event={"ID":"f29ae8a1-b3cc-452c-ac99-b450ef3125d8","Type":"ContainerStarted","Data":"6ec2961c66c2e9651f7fa79f615b6ade4d1fc9deb7327e765b2f6fda45cc46c8"} Feb 18 19:35:22 crc 
kubenswrapper[4942]: I0218 19:35:22.615211 4942 generic.go:334] "Generic (PLEG): container finished" podID="47732c7e-8c0f-4244-bddb-98bf7b21d2db" containerID="b13ac4955f984728a414f0dd111c2e579b7dc9058268103695046c6e78fc7cfc" exitCode=0 Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.615307 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" event={"ID":"47732c7e-8c0f-4244-bddb-98bf7b21d2db","Type":"ContainerDied","Data":"b13ac4955f984728a414f0dd111c2e579b7dc9058268103695046c6e78fc7cfc"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.615335 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" event={"ID":"47732c7e-8c0f-4244-bddb-98bf7b21d2db","Type":"ContainerStarted","Data":"8337fe8032827581404d71567c1183946117f42260043a9aad5e272dceb8f9f6"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.620497 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc44d6c6-sp59w" event={"ID":"df34bdbb-8771-4d46-b5ba-29088c793a4c","Type":"ContainerStarted","Data":"16cfdf5777da304074f8658c0e294de7985ac237e0c31312cdfc21ceef0ca88c"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.622424 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3","Type":"ContainerStarted","Data":"a08610a6a430e153a9003711c6d5df1b3e69d004820a8579935266408e2afede"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.625043 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b6b6597b8-m8ngr" event={"ID":"55d24776-2d1c-413a-8ba1-06cdadf63d04","Type":"ContainerStarted","Data":"8e07cc3497636460ed1799cf2428d3afb905f2937022e98e328e60ed8e665be5"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.626907 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerStarted","Data":"36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.634036 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tnqg7" podStartSLOduration=10.634017397000001 podStartE2EDuration="10.634017397s" podCreationTimestamp="2026-02-18 19:35:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:22.630896476 +0000 UTC m=+1082.335829171" watchObservedRunningTime="2026-02-18 19:35:22.634017397 +0000 UTC m=+1082.338950062" Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.645347 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9ntpw" event={"ID":"af8e769c-00c3-41a1-97c4-d91902767dfe","Type":"ContainerStarted","Data":"a5a266a5f35f400b4926f114a4e397e8de76de3f56a176f14c64d1b553d123f4"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.652537 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d64cf59b-xp7rk" event={"ID":"3ecc91e6-4e7f-438f-8530-bb8dd55764c5","Type":"ContainerStarted","Data":"f9c6502e1e5809e23b3664eb42d069f99f7705e9a66bf07935b4912b98778c64"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.666292 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4h9n5" event={"ID":"983d5293-8413-4a29-88b2-ba775b3b4a8b","Type":"ContainerStarted","Data":"96103ab065d78416959c1d84cf5d96a95a67496c5bf29a0bff2dd2c96318a211"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.675325 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.682944 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" 
event={"ID":"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd","Type":"ContainerStarted","Data":"a28152676e5bbeaa52dbf0acfa190644662ce9fce2d0b5f7310504317b4faf82"} Feb 18 19:35:22 crc kubenswrapper[4942]: W0218 19:35:22.707138 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1019761a_2eb2_43f0_bce6_94e8b11a5c6a.slice/crio-a86dd504e5cfc46b03f20f6e448da41a9c6e744b02c0f0f6b9cfc4506ef33bc9 WatchSource:0}: Error finding container a86dd504e5cfc46b03f20f6e448da41a9c6e744b02c0f0f6b9cfc4506ef33bc9: Status 404 returned error can't find the container with id a86dd504e5cfc46b03f20f6e448da41a9c6e744b02c0f0f6b9cfc4506ef33bc9 Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.716700 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-9ntpw" podStartSLOduration=5.071064364 podStartE2EDuration="33.716679316s" podCreationTimestamp="2026-02-18 19:34:49 +0000 UTC" firstStartedPulling="2026-02-18 19:34:51.139008067 +0000 UTC m=+1050.843940732" lastFinishedPulling="2026-02-18 19:35:19.784623019 +0000 UTC m=+1079.489555684" observedRunningTime="2026-02-18 19:35:22.689220002 +0000 UTC m=+1082.394152667" watchObservedRunningTime="2026-02-18 19:35:22.716679316 +0000 UTC m=+1082.421611981" Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.732556 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-4h9n5" podStartSLOduration=8.852964198 podStartE2EDuration="1m5.732538248s" podCreationTimestamp="2026-02-18 19:34:17 +0000 UTC" firstStartedPulling="2026-02-18 19:34:24.184447176 +0000 UTC m=+1023.889379841" lastFinishedPulling="2026-02-18 19:35:21.064021226 +0000 UTC m=+1080.768953891" observedRunningTime="2026-02-18 19:35:22.705122715 +0000 UTC m=+1082.410055370" watchObservedRunningTime="2026-02-18 19:35:22.732538248 +0000 UTC m=+1082.437470913" Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.858617 
4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nnzck" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: i/o timeout" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.148203 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.347487 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8v6l\" (UniqueName: \"kubernetes.io/projected/47732c7e-8c0f-4244-bddb-98bf7b21d2db-kube-api-access-d8v6l\") pod \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.347553 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-config\") pod \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.347574 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-swift-storage-0\") pod \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.347666 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-nb\") pod \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.347732 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-svc\") pod \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.347749 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-sb\") pod \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.373966 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47732c7e-8c0f-4244-bddb-98bf7b21d2db-kube-api-access-d8v6l" (OuterVolumeSpecName: "kube-api-access-d8v6l") pod "47732c7e-8c0f-4244-bddb-98bf7b21d2db" (UID: "47732c7e-8c0f-4244-bddb-98bf7b21d2db"). InnerVolumeSpecName "kube-api-access-d8v6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.394311 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "47732c7e-8c0f-4244-bddb-98bf7b21d2db" (UID: "47732c7e-8c0f-4244-bddb-98bf7b21d2db"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.422683 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "47732c7e-8c0f-4244-bddb-98bf7b21d2db" (UID: "47732c7e-8c0f-4244-bddb-98bf7b21d2db"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.423274 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-config" (OuterVolumeSpecName: "config") pod "47732c7e-8c0f-4244-bddb-98bf7b21d2db" (UID: "47732c7e-8c0f-4244-bddb-98bf7b21d2db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.423529 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "47732c7e-8c0f-4244-bddb-98bf7b21d2db" (UID: "47732c7e-8c0f-4244-bddb-98bf7b21d2db"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.433291 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "47732c7e-8c0f-4244-bddb-98bf7b21d2db" (UID: "47732c7e-8c0f-4244-bddb-98bf7b21d2db"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.452491 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.452533 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.452549 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8v6l\" (UniqueName: \"kubernetes.io/projected/47732c7e-8c0f-4244-bddb-98bf7b21d2db-kube-api-access-d8v6l\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.452561 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.452573 4942 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.452585 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.704854 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b6b6597b8-m8ngr" event={"ID":"55d24776-2d1c-413a-8ba1-06cdadf63d04","Type":"ContainerStarted","Data":"fdd3811b77cebb81cb4d835bd7bb9549dffa32c8c00fba3295d69f115674b90e"} Feb 18 19:35:23 crc 
kubenswrapper[4942]: I0218 19:35:23.707363 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b6b6597b8-m8ngr" event={"ID":"55d24776-2d1c-413a-8ba1-06cdadf63d04","Type":"ContainerStarted","Data":"0f654f8decc1fb809c31df37ac391bf8043913d039ad32343d90b0ea671290c4"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.713314 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d64cf59b-xp7rk" event={"ID":"3ecc91e6-4e7f-438f-8530-bb8dd55764c5","Type":"ContainerStarted","Data":"4bd98068ec637cd03846de3ac7d0bc145a81ebf089811ebc4b9501aa76cae874"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.713357 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d64cf59b-xp7rk" event={"ID":"3ecc91e6-4e7f-438f-8530-bb8dd55764c5","Type":"ContainerStarted","Data":"036dc92b12e420ef80458fb3e23d3375424a9aed1ed6d80a904da58e73ba2659"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.737422 4942 generic.go:334] "Generic (PLEG): container finished" podID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerID="9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe" exitCode=0 Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.737509 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" event={"ID":"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd","Type":"ContainerDied","Data":"9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.740990 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" event={"ID":"47732c7e-8c0f-4244-bddb-98bf7b21d2db","Type":"ContainerDied","Data":"8337fe8032827581404d71567c1183946117f42260043a9aad5e272dceb8f9f6"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.741060 4942 scope.go:117] "RemoveContainer" containerID="b13ac4955f984728a414f0dd111c2e579b7dc9058268103695046c6e78fc7cfc" Feb 18 19:35:23 crc 
kubenswrapper[4942]: I0218 19:35:23.741228 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.750137 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b6b6597b8-m8ngr" podStartSLOduration=23.794370455 podStartE2EDuration="24.750114549s" podCreationTimestamp="2026-02-18 19:34:59 +0000 UTC" firstStartedPulling="2026-02-18 19:35:21.985948271 +0000 UTC m=+1081.690880936" lastFinishedPulling="2026-02-18 19:35:22.941692355 +0000 UTC m=+1082.646625030" observedRunningTime="2026-02-18 19:35:23.729604046 +0000 UTC m=+1083.434536711" watchObservedRunningTime="2026-02-18 19:35:23.750114549 +0000 UTC m=+1083.455047214" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.771662 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1019761a-2eb2-43f0-bce6-94e8b11a5c6a","Type":"ContainerStarted","Data":"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.771703 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1019761a-2eb2-43f0-bce6-94e8b11a5c6a","Type":"ContainerStarted","Data":"a86dd504e5cfc46b03f20f6e448da41a9c6e744b02c0f0f6b9cfc4506ef33bc9"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.811045 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-54d64cf59b-xp7rk" podStartSLOduration=23.777483076 podStartE2EDuration="24.811019023s" podCreationTimestamp="2026-02-18 19:34:59 +0000 UTC" firstStartedPulling="2026-02-18 19:35:21.812360148 +0000 UTC m=+1081.517292813" lastFinishedPulling="2026-02-18 19:35:22.845896105 +0000 UTC m=+1082.550828760" observedRunningTime="2026-02-18 19:35:23.767859091 +0000 UTC m=+1083.472791756" 
watchObservedRunningTime="2026-02-18 19:35:23.811019023 +0000 UTC m=+1083.515951678" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.811395 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc44d6c6-sp59w" event={"ID":"df34bdbb-8771-4d46-b5ba-29088c793a4c","Type":"ContainerStarted","Data":"686f47180a9ccf7623cbed7358eef7f2d2fa27a8a72e96ad726f79f619dd1afc"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.811436 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc44d6c6-sp59w" event={"ID":"df34bdbb-8771-4d46-b5ba-29088c793a4c","Type":"ContainerStarted","Data":"8b2790adbab8c3f7f1e931b6f90eb17d0d170a8ea3e8297671b08ac8cd2f42be"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.811477 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.815003 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3","Type":"ContainerStarted","Data":"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.815037 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6b8c9f8ffc-qtdr8"] Feb 18 19:35:23 crc kubenswrapper[4942]: E0218 19:35:23.815727 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47732c7e-8c0f-4244-bddb-98bf7b21d2db" containerName="init" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.815750 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="47732c7e-8c0f-4244-bddb-98bf7b21d2db" containerName="init" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.815922 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="47732c7e-8c0f-4244-bddb-98bf7b21d2db" containerName="init" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.827875 4942 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.828356 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b8c9f8ffc-qtdr8"] Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.831477 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.831916 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.875246 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-internal-tls-certs\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.875304 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-httpd-config\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.875500 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-public-tls-certs\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.875899 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hnxj\" (UniqueName: 
\"kubernetes.io/projected/921d1a28-ead8-42a6-933c-38a339741884-kube-api-access-4hnxj\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.884166 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-ovndb-tls-certs\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.884780 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-combined-ca-bundle\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.886139 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-config\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.908852 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jhblh"] Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.925982 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jhblh"] Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.946894 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-67cc44d6c6-sp59w" podStartSLOduration=2.946874223 podStartE2EDuration="2.946874223s" 
podCreationTimestamp="2026-02-18 19:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:23.91634484 +0000 UTC m=+1083.621277495" watchObservedRunningTime="2026-02-18 19:35:23.946874223 +0000 UTC m=+1083.651806888" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.987714 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hnxj\" (UniqueName: \"kubernetes.io/projected/921d1a28-ead8-42a6-933c-38a339741884-kube-api-access-4hnxj\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.987785 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-ovndb-tls-certs\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.987806 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-combined-ca-bundle\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.987877 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-config\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.987915 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-internal-tls-certs\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.987936 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-httpd-config\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.987964 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-public-tls-certs\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.996578 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-httpd-config\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.997140 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-public-tls-certs\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.000595 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-internal-tls-certs\") pod 
\"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.001423 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-ovndb-tls-certs\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.006923 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-combined-ca-bundle\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.008608 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-config\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.018473 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hnxj\" (UniqueName: \"kubernetes.io/projected/921d1a28-ead8-42a6-933c-38a339741884-kube-api-access-4hnxj\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.196832 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.841561 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1019761a-2eb2-43f0-bce6-94e8b11a5c6a","Type":"ContainerStarted","Data":"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698"} Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.841981 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerName="glance-log" containerID="cri-o://9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7" gracePeriod=30 Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.842410 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerName="glance-httpd" containerID="cri-o://c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698" gracePeriod=30 Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.851247 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3","Type":"ContainerStarted","Data":"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530"} Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.851404 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerName="glance-log" containerID="cri-o://286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e" gracePeriod=30 Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.851521 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerName="glance-httpd" containerID="cri-o://608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530" gracePeriod=30 Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.859904 4942 generic.go:334] "Generic (PLEG): container finished" podID="af8e769c-00c3-41a1-97c4-d91902767dfe" containerID="a5a266a5f35f400b4926f114a4e397e8de76de3f56a176f14c64d1b553d123f4" exitCode=0 Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.860294 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9ntpw" event={"ID":"af8e769c-00c3-41a1-97c4-d91902767dfe","Type":"ContainerDied","Data":"a5a266a5f35f400b4926f114a4e397e8de76de3f56a176f14c64d1b553d123f4"} Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.882974 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" event={"ID":"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd","Type":"ContainerStarted","Data":"07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587"} Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.883918 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.948546 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=21.948524871 podStartE2EDuration="21.948524871s" podCreationTimestamp="2026-02-18 19:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:24.902367501 +0000 UTC m=+1084.607300196" watchObservedRunningTime="2026-02-18 19:35:24.948524871 +0000 UTC m=+1084.653457536" Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.951094 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" 
podStartSLOduration=21.951083047 podStartE2EDuration="21.951083047s" podCreationTimestamp="2026-02-18 19:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:24.86693196 +0000 UTC m=+1084.571864635" watchObservedRunningTime="2026-02-18 19:35:24.951083047 +0000 UTC m=+1084.656015712" Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.963365 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" podStartSLOduration=3.963345026 podStartE2EDuration="3.963345026s" podCreationTimestamp="2026-02-18 19:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:24.939341572 +0000 UTC m=+1084.644274247" watchObservedRunningTime="2026-02-18 19:35:24.963345026 +0000 UTC m=+1084.668277691" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.051117 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47732c7e-8c0f-4244-bddb-98bf7b21d2db" path="/var/lib/kubelet/pods/47732c7e-8c0f-4244-bddb-98bf7b21d2db/volumes" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.395207 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b8c9f8ffc-qtdr8"] Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.618035 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.748084 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-config-data\") pod \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.748308 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-httpd-run\") pod \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.748381 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-combined-ca-bundle\") pod \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.748412 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-logs\") pod \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.748436 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n92mx\" (UniqueName: \"kubernetes.io/projected/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-kube-api-access-n92mx\") pod \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.748481 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-scripts\") pod \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.748498 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.748644 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1019761a-2eb2-43f0-bce6-94e8b11a5c6a" (UID: "1019761a-2eb2-43f0-bce6-94e8b11a5c6a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.749069 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-logs" (OuterVolumeSpecName: "logs") pod "1019761a-2eb2-43f0-bce6-94e8b11a5c6a" (UID: "1019761a-2eb2-43f0-bce6-94e8b11a5c6a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.749092 4942 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.753435 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-kube-api-access-n92mx" (OuterVolumeSpecName: "kube-api-access-n92mx") pod "1019761a-2eb2-43f0-bce6-94e8b11a5c6a" (UID: "1019761a-2eb2-43f0-bce6-94e8b11a5c6a"). InnerVolumeSpecName "kube-api-access-n92mx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.761045 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.763323 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-scripts" (OuterVolumeSpecName: "scripts") pod "1019761a-2eb2-43f0-bce6-94e8b11a5c6a" (UID: "1019761a-2eb2-43f0-bce6-94e8b11a5c6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.763858 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "1019761a-2eb2-43f0-bce6-94e8b11a5c6a" (UID: "1019761a-2eb2-43f0-bce6-94e8b11a5c6a"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.781135 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1019761a-2eb2-43f0-bce6-94e8b11a5c6a" (UID: "1019761a-2eb2-43f0-bce6-94e8b11a5c6a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.849955 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-logs\") pod \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.849987 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.849983 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-config-data" (OuterVolumeSpecName: "config-data") pod "1019761a-2eb2-43f0-bce6-94e8b11a5c6a" (UID: "1019761a-2eb2-43f0-bce6-94e8b11a5c6a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850029 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-scripts\") pod \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850066 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6mcd\" (UniqueName: \"kubernetes.io/projected/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-kube-api-access-r6mcd\") pod \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850087 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-httpd-run\") pod \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850108 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-config-data\") pod \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850140 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-combined-ca-bundle\") pod \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850163 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-config-data\") pod \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850446 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850468 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850479 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850524 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850532 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n92mx\" (UniqueName: \"kubernetes.io/projected/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-kube-api-access-n92mx\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850612 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-logs" (OuterVolumeSpecName: "logs") pod "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" (UID: "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850953 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" (UID: "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:35:25 crc kubenswrapper[4942]: W0218 19:35:25.853100 4942 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1019761a-2eb2-43f0-bce6-94e8b11a5c6a/volumes/kubernetes.io~secret/config-data Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.853121 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-config-data" (OuterVolumeSpecName: "config-data") pod "1019761a-2eb2-43f0-bce6-94e8b11a5c6a" (UID: "1019761a-2eb2-43f0-bce6-94e8b11a5c6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.873326 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" (UID: "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.875365 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-scripts" (OuterVolumeSpecName: "scripts") pod "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" (UID: "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.895810 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8c9f8ffc-qtdr8" event={"ID":"921d1a28-ead8-42a6-933c-38a339741884","Type":"ContainerStarted","Data":"5406c6b90781279268f75608c064a21d3a65e4eb4c8a4c7e959d4465b49185b9"} Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.895851 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8c9f8ffc-qtdr8" event={"ID":"921d1a28-ead8-42a6-933c-38a339741884","Type":"ContainerStarted","Data":"66f57c246570cb64775a601036f5870a5885605c57cb8be2088eae510c596f8b"} Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.897468 4942 generic.go:334] "Generic (PLEG): container finished" podID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerID="608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530" exitCode=0 Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.897490 4942 generic.go:334] "Generic (PLEG): container finished" podID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerID="286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e" exitCode=143 Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.897522 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3","Type":"ContainerDied","Data":"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530"} Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.897540 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3","Type":"ContainerDied","Data":"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e"} Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.897550 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3","Type":"ContainerDied","Data":"a08610a6a430e153a9003711c6d5df1b3e69d004820a8579935266408e2afede"} Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.897565 4942 scope.go:117] "RemoveContainer" containerID="608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.897743 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.905724 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-kube-api-access-r6mcd" (OuterVolumeSpecName: "kube-api-access-r6mcd") pod "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" (UID: "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3"). InnerVolumeSpecName "kube-api-access-r6mcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.906215 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerStarted","Data":"e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd"} Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.906927 4942 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.908056 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" (UID: "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.911789 4942 generic.go:334] "Generic (PLEG): container finished" podID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerID="c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698" exitCode=143 Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.911817 4942 generic.go:334] "Generic (PLEG): container finished" podID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerID="9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7" exitCode=143 Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.912094 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.912080 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1019761a-2eb2-43f0-bce6-94e8b11a5c6a","Type":"ContainerDied","Data":"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698"} Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.912382 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1019761a-2eb2-43f0-bce6-94e8b11a5c6a","Type":"ContainerDied","Data":"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7"} Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.912404 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1019761a-2eb2-43f0-bce6-94e8b11a5c6a","Type":"ContainerDied","Data":"a86dd504e5cfc46b03f20f6e448da41a9c6e744b02c0f0f6b9cfc4506ef33bc9"} Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.939244 4942 scope.go:117] "RemoveContainer" containerID="286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.951753 4942 reconciler_common.go:293] "Volume detached for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.951807 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.951816 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.951825 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6mcd\" (UniqueName: \"kubernetes.io/projected/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-kube-api-access-r6mcd\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.951836 4942 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.951844 4942 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.951853 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.951861 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 
19:35:25.956031 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.984160 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-config-data" (OuterVolumeSpecName: "config-data") pod "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" (UID: "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.993180 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.007119 4942 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017077 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:35:26 crc kubenswrapper[4942]: E0218 19:35:26.017615 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerName="glance-log" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017632 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerName="glance-log" Feb 18 19:35:26 crc kubenswrapper[4942]: E0218 19:35:26.017650 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerName="glance-httpd" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017657 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerName="glance-httpd" Feb 18 19:35:26 crc kubenswrapper[4942]: E0218 19:35:26.017695 4942 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerName="glance-httpd" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017703 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerName="glance-httpd" Feb 18 19:35:26 crc kubenswrapper[4942]: E0218 19:35:26.017718 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerName="glance-log" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017723 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerName="glance-log" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017918 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerName="glance-log" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017935 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerName="glance-httpd" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017944 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerName="glance-httpd" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017986 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerName="glance-log" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.019112 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.020135 4942 scope.go:117] "RemoveContainer" containerID="608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530" Feb 18 19:35:26 crc kubenswrapper[4942]: E0218 19:35:26.020545 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530\": container with ID starting with 608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530 not found: ID does not exist" containerID="608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.020573 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530"} err="failed to get container status \"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530\": rpc error: code = NotFound desc = could not find container \"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530\": container with ID starting with 608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530 not found: ID does not exist" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.020594 4942 scope.go:117] "RemoveContainer" containerID="286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e" Feb 18 19:35:26 crc kubenswrapper[4942]: E0218 19:35:26.020772 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e\": container with ID starting with 286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e not found: ID does not exist" containerID="286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 
19:35:26.020792 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e"} err="failed to get container status \"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e\": rpc error: code = NotFound desc = could not find container \"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e\": container with ID starting with 286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e not found: ID does not exist" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.020806 4942 scope.go:117] "RemoveContainer" containerID="608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.021019 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530"} err="failed to get container status \"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530\": rpc error: code = NotFound desc = could not find container \"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530\": container with ID starting with 608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530 not found: ID does not exist" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.021056 4942 scope.go:117] "RemoveContainer" containerID="286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.021300 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e"} err="failed to get container status \"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e\": rpc error: code = NotFound desc = could not find container \"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e\": container with ID starting with 
286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e not found: ID does not exist" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.021314 4942 scope.go:117] "RemoveContainer" containerID="c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.021777 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.021958 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.028375 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.053062 4942 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.053101 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.106175 4942 scope.go:117] "RemoveContainer" containerID="9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.155571 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-scripts\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.155820 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.155958 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.155992 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.156059 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.156109 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk7vw\" (UniqueName: \"kubernetes.io/projected/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-kube-api-access-rk7vw\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 
19:35:26.156150 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-config-data\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.156240 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-logs\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.178008 4942 scope.go:117] "RemoveContainer" containerID="c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698" Feb 18 19:35:26 crc kubenswrapper[4942]: E0218 19:35:26.178430 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698\": container with ID starting with c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698 not found: ID does not exist" containerID="c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.178465 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698"} err="failed to get container status \"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698\": rpc error: code = NotFound desc = could not find container \"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698\": container with ID starting with c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698 not found: ID does not exist" Feb 18 19:35:26 
crc kubenswrapper[4942]: I0218 19:35:26.178491 4942 scope.go:117] "RemoveContainer" containerID="9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7" Feb 18 19:35:26 crc kubenswrapper[4942]: E0218 19:35:26.180443 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7\": container with ID starting with 9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7 not found: ID does not exist" containerID="9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.180492 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7"} err="failed to get container status \"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7\": rpc error: code = NotFound desc = could not find container \"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7\": container with ID starting with 9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7 not found: ID does not exist" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.180526 4942 scope.go:117] "RemoveContainer" containerID="c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.180891 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698"} err="failed to get container status \"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698\": rpc error: code = NotFound desc = could not find container \"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698\": container with ID starting with c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698 not found: ID does not exist" Feb 18 
19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.180916 4942 scope.go:117] "RemoveContainer" containerID="9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.183073 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7"} err="failed to get container status \"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7\": rpc error: code = NotFound desc = could not find container \"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7\": container with ID starting with 9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7 not found: ID does not exist" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.257625 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-logs\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.257682 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-scripts\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.257740 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.257831 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.257849 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.257874 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.257893 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk7vw\" (UniqueName: \"kubernetes.io/projected/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-kube-api-access-rk7vw\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.257916 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-config-data\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.260500 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-logs\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.261466 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.262744 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-config-data\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.265122 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.266275 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.267523 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.273338 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.283661 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk7vw\" (UniqueName: \"kubernetes.io/projected/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-kube-api-access-rk7vw\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.322985 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.348240 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.417000 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.448586 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.461611 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.463634 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.467150 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.467385 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.468827 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.505248 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-9ntpw" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.567116 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.567184 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.572091 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.572162 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-logs\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.572211 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.572231 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bv8n\" (UniqueName: \"kubernetes.io/projected/5cd0efdc-b208-4270-9c23-33e01f7298be-kube-api-access-8bv8n\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.572334 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.572501 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.673366 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-combined-ca-bundle\") pod \"af8e769c-00c3-41a1-97c4-d91902767dfe\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.673418 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af8e769c-00c3-41a1-97c4-d91902767dfe-logs\") pod \"af8e769c-00c3-41a1-97c4-d91902767dfe\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " Feb 18 
19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.673575 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-config-data\") pod \"af8e769c-00c3-41a1-97c4-d91902767dfe\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.674204 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af8e769c-00c3-41a1-97c4-d91902767dfe-logs" (OuterVolumeSpecName: "logs") pod "af8e769c-00c3-41a1-97c4-d91902767dfe" (UID: "af8e769c-00c3-41a1-97c4-d91902767dfe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.685895 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-scripts\") pod \"af8e769c-00c3-41a1-97c4-d91902767dfe\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686135 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dmdr\" (UniqueName: \"kubernetes.io/projected/af8e769c-00c3-41a1-97c4-d91902767dfe-kube-api-access-9dmdr\") pod \"af8e769c-00c3-41a1-97c4-d91902767dfe\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686503 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686563 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686624 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686708 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686745 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-logs\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686801 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686823 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bv8n\" (UniqueName: 
\"kubernetes.io/projected/5cd0efdc-b208-4270-9c23-33e01f7298be-kube-api-access-8bv8n\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686940 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.687097 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af8e769c-00c3-41a1-97c4-d91902767dfe-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.690389 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.694577 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-logs\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.694826 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 
19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.698540 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.705966 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af8e769c-00c3-41a1-97c4-d91902767dfe-kube-api-access-9dmdr" (OuterVolumeSpecName: "kube-api-access-9dmdr") pod "af8e769c-00c3-41a1-97c4-d91902767dfe" (UID: "af8e769c-00c3-41a1-97c4-d91902767dfe"). InnerVolumeSpecName "kube-api-access-9dmdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.705975 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-scripts" (OuterVolumeSpecName: "scripts") pod "af8e769c-00c3-41a1-97c4-d91902767dfe" (UID: "af8e769c-00c3-41a1-97c4-d91902767dfe"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.729406 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bv8n\" (UniqueName: \"kubernetes.io/projected/5cd0efdc-b208-4270-9c23-33e01f7298be-kube-api-access-8bv8n\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.729503 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.731131 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.737750 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.753887 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-config-data" (OuterVolumeSpecName: "config-data") pod "af8e769c-00c3-41a1-97c4-d91902767dfe" (UID: "af8e769c-00c3-41a1-97c4-d91902767dfe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.754153 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af8e769c-00c3-41a1-97c4-d91902767dfe" (UID: "af8e769c-00c3-41a1-97c4-d91902767dfe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.784883 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.790510 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dmdr\" (UniqueName: \"kubernetes.io/projected/af8e769c-00c3-41a1-97c4-d91902767dfe-kube-api-access-9dmdr\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.790546 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.790557 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.790568 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.812733 4942 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.926366 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9ntpw" event={"ID":"af8e769c-00c3-41a1-97c4-d91902767dfe","Type":"ContainerDied","Data":"eb7a8e3a23f3477cac51aacb10a95d5378f6772c63aae9e96752efd516b0a2a1"} Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.926415 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb7a8e3a23f3477cac51aacb10a95d5378f6772c63aae9e96752efd516b0a2a1" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.926492 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9ntpw" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.946244 4942 generic.go:334] "Generic (PLEG): container finished" podID="983d5293-8413-4a29-88b2-ba775b3b4a8b" containerID="96103ab065d78416959c1d84cf5d96a95a67496c5bf29a0bff2dd2c96318a211" exitCode=0 Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.946310 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4h9n5" event={"ID":"983d5293-8413-4a29-88b2-ba775b3b4a8b","Type":"ContainerDied","Data":"96103ab065d78416959c1d84cf5d96a95a67496c5bf29a0bff2dd2c96318a211"} Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.989215 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8c9f8ffc-qtdr8" event={"ID":"921d1a28-ead8-42a6-933c-38a339741884","Type":"ContainerStarted","Data":"531ee7816fd7353cd71c0f54232b96ad0dd37eddd3c96b8ac1f0e58197be9795"} Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.991203 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.010478 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-external-api-0"] Feb 18 19:35:27 crc kubenswrapper[4942]: W0218 19:35:27.023653 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc47abc8_8f2f_41c6_96c3_d6e81388e5b2.slice/crio-5071cc9380a8d29894cf185feb69d5860ec44c77140f4a82c7520791aad9109c WatchSource:0}: Error finding container 5071cc9380a8d29894cf185feb69d5860ec44c77140f4a82c7520791aad9109c: Status 404 returned error can't find the container with id 5071cc9380a8d29894cf185feb69d5860ec44c77140f4a82c7520791aad9109c Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.069637 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" path="/var/lib/kubelet/pods/1019761a-2eb2-43f0-bce6-94e8b11a5c6a/volumes" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.070614 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" path="/var/lib/kubelet/pods/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3/volumes" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.087191 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6b8c9f8ffc-qtdr8" podStartSLOduration=4.087173574 podStartE2EDuration="4.087173574s" podCreationTimestamp="2026-02-18 19:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:27.020970333 +0000 UTC m=+1086.725903008" watchObservedRunningTime="2026-02-18 19:35:27.087173574 +0000 UTC m=+1086.792106239" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.117940 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5794bf846d-82xzg"] Feb 18 19:35:27 crc kubenswrapper[4942]: E0218 19:35:27.118394 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8e769c-00c3-41a1-97c4-d91902767dfe" 
containerName="placement-db-sync" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.118415 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8e769c-00c3-41a1-97c4-d91902767dfe" containerName="placement-db-sync" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.118617 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8e769c-00c3-41a1-97c4-d91902767dfe" containerName="placement-db-sync" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.119532 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.125986 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.126149 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.131992 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.132367 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-z4q86" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.132521 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.132659 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5794bf846d-82xzg"] Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.201717 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab301488-e86d-4ba2-b628-f4ea689acd3b-logs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " 
pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.201793 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-scripts\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.201823 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-combined-ca-bundle\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.201912 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-config-data\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.201955 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-public-tls-certs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.201987 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v89br\" (UniqueName: \"kubernetes.io/projected/ab301488-e86d-4ba2-b628-f4ea689acd3b-kube-api-access-v89br\") pod \"placement-5794bf846d-82xzg\" (UID: 
\"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.202032 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-internal-tls-certs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.305468 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab301488-e86d-4ba2-b628-f4ea689acd3b-logs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.305851 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-scripts\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.306019 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-combined-ca-bundle\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.306129 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-config-data\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 
18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.306181 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-public-tls-certs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.306222 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v89br\" (UniqueName: \"kubernetes.io/projected/ab301488-e86d-4ba2-b628-f4ea689acd3b-kube-api-access-v89br\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.306254 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-internal-tls-certs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.308286 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab301488-e86d-4ba2-b628-f4ea689acd3b-logs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.312865 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-public-tls-certs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.313142 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-scripts\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.313634 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-config-data\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.314359 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-combined-ca-bundle\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.314424 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-internal-tls-certs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.326930 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v89br\" (UniqueName: \"kubernetes.io/projected/ab301488-e86d-4ba2-b628-f4ea689acd3b-kube-api-access-v89br\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.463503 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.503122 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:27 crc kubenswrapper[4942]: W0218 19:35:27.528869 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cd0efdc_b208_4270_9c23_33e01f7298be.slice/crio-a6a851f31a8af36c76a03d082cd2bcde730a917e0fda0acf37bf24b1cd98ff69 WatchSource:0}: Error finding container a6a851f31a8af36c76a03d082cd2bcde730a917e0fda0acf37bf24b1cd98ff69: Status 404 returned error can't find the container with id a6a851f31a8af36c76a03d082cd2bcde730a917e0fda0acf37bf24b1cd98ff69 Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.052205 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2","Type":"ContainerStarted","Data":"af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2"} Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.052693 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2","Type":"ContainerStarted","Data":"5071cc9380a8d29894cf185feb69d5860ec44c77140f4a82c7520791aad9109c"} Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.052708 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5794bf846d-82xzg"] Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.061702 4942 generic.go:334] "Generic (PLEG): container finished" podID="f29ae8a1-b3cc-452c-ac99-b450ef3125d8" containerID="16fd17087ed9bd06ba590a2897d1853b93c4e9cb882e3c311955fd4cf453c84b" exitCode=0 Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.061828 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tnqg7" 
event={"ID":"f29ae8a1-b3cc-452c-ac99-b450ef3125d8","Type":"ContainerDied","Data":"16fd17087ed9bd06ba590a2897d1853b93c4e9cb882e3c311955fd4cf453c84b"} Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.063561 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5cd0efdc-b208-4270-9c23-33e01f7298be","Type":"ContainerStarted","Data":"a6a851f31a8af36c76a03d082cd2bcde730a917e0fda0acf37bf24b1cd98ff69"} Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.716027 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.847664 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfbhf\" (UniqueName: \"kubernetes.io/projected/983d5293-8413-4a29-88b2-ba775b3b4a8b-kube-api-access-mfbhf\") pod \"983d5293-8413-4a29-88b2-ba775b3b4a8b\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.847847 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-config-data\") pod \"983d5293-8413-4a29-88b2-ba775b3b4a8b\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.847935 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-db-sync-config-data\") pod \"983d5293-8413-4a29-88b2-ba775b3b4a8b\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.847968 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-combined-ca-bundle\") pod 
\"983d5293-8413-4a29-88b2-ba775b3b4a8b\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.852282 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "983d5293-8413-4a29-88b2-ba775b3b4a8b" (UID: "983d5293-8413-4a29-88b2-ba775b3b4a8b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.860003 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983d5293-8413-4a29-88b2-ba775b3b4a8b-kube-api-access-mfbhf" (OuterVolumeSpecName: "kube-api-access-mfbhf") pod "983d5293-8413-4a29-88b2-ba775b3b4a8b" (UID: "983d5293-8413-4a29-88b2-ba775b3b4a8b"). InnerVolumeSpecName "kube-api-access-mfbhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.902918 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "983d5293-8413-4a29-88b2-ba775b3b4a8b" (UID: "983d5293-8413-4a29-88b2-ba775b3b4a8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.923298 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-config-data" (OuterVolumeSpecName: "config-data") pod "983d5293-8413-4a29-88b2-ba775b3b4a8b" (UID: "983d5293-8413-4a29-88b2-ba775b3b4a8b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.949777 4942 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.949806 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.949815 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfbhf\" (UniqueName: \"kubernetes.io/projected/983d5293-8413-4a29-88b2-ba775b3b4a8b-kube-api-access-mfbhf\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.949824 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.107790 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2","Type":"ContainerStarted","Data":"0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7"} Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.110350 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4h9n5" event={"ID":"983d5293-8413-4a29-88b2-ba775b3b4a8b","Type":"ContainerDied","Data":"4d89390c95728bcf123b54a9e3391d1834069387fcdf07d8c1f1a0845cb094b5"} Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.110378 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d89390c95728bcf123b54a9e3391d1834069387fcdf07d8c1f1a0845cb094b5" Feb 18 19:35:29 
crc kubenswrapper[4942]: I0218 19:35:29.110438 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.134814 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.13479775 podStartE2EDuration="4.13479775s" podCreationTimestamp="2026-02-18 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:29.118124057 +0000 UTC m=+1088.823056722" watchObservedRunningTime="2026-02-18 19:35:29.13479775 +0000 UTC m=+1088.839730415" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.139657 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5cd0efdc-b208-4270-9c23-33e01f7298be","Type":"ContainerStarted","Data":"1b92a562ea433f43d820eeece6e874b38a343cedbb1b276827ec28ad7679c4ae"} Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.163663 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5794bf846d-82xzg" event={"ID":"ab301488-e86d-4ba2-b628-f4ea689acd3b","Type":"ContainerStarted","Data":"9ef44ea2e648e2bbfb3bd289c97d6ea2ed93750446192377e2017b04b006f489"} Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.163699 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5794bf846d-82xzg" event={"ID":"ab301488-e86d-4ba2-b628-f4ea689acd3b","Type":"ContainerStarted","Data":"f8a851dfe023e77ce2012d0b840a4729b646e24254cac11ed22579fa4353c01b"} Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.163708 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5794bf846d-82xzg" event={"ID":"ab301488-e86d-4ba2-b628-f4ea689acd3b","Type":"ContainerStarted","Data":"5655340f4bf0abd595b0c47b02dacb9178105661696797fd33a844b3ed3d1922"} Feb 18 
19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.168121 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.168159 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.194806 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5794bf846d-82xzg" podStartSLOduration=2.194783 podStartE2EDuration="2.194783s" podCreationTimestamp="2026-02-18 19:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:29.192447799 +0000 UTC m=+1088.897380464" watchObservedRunningTime="2026-02-18 19:35:29.194783 +0000 UTC m=+1088.899715665" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.259677 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:35:29 crc kubenswrapper[4942]: E0218 19:35:29.260203 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983d5293-8413-4a29-88b2-ba775b3b4a8b" containerName="watcher-db-sync" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.260215 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="983d5293-8413-4a29-88b2-ba775b3b4a8b" containerName="watcher-db-sync" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.260446 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="983d5293-8413-4a29-88b2-ba775b3b4a8b" containerName="watcher-db-sync" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.261326 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.265248 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-jp82k" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.265439 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.282696 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.345866 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.346969 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.350783 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357077 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357113 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ldkj\" (UniqueName: \"kubernetes.io/projected/e9b5326c-208f-40ba-b395-8a6cf6b52399-kube-api-access-2ldkj\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357127 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357156 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cf66c1e-2f67-4785-85e9-f0b06e578d29-logs\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357180 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b5326c-208f-40ba-b395-8a6cf6b52399-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357266 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9b5326c-208f-40ba-b395-8a6cf6b52399-logs\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357280 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b5326c-208f-40ba-b395-8a6cf6b52399-config-data\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357303 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-config-data\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357348 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89hpq\" (UniqueName: \"kubernetes.io/projected/9cf66c1e-2f67-4785-85e9-f0b06e578d29-kube-api-access-89hpq\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.376069 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.378449 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.386027 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.396923 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.403589 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459201 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459262 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89hpq\" 
(UniqueName: \"kubernetes.io/projected/9cf66c1e-2f67-4785-85e9-f0b06e578d29-kube-api-access-89hpq\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459290 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459354 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459373 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ldkj\" (UniqueName: \"kubernetes.io/projected/e9b5326c-208f-40ba-b395-8a6cf6b52399-kube-api-access-2ldkj\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459389 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459416 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-config-data\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459445 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cf66c1e-2f67-4785-85e9-f0b06e578d29-logs\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459470 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b5326c-208f-40ba-b395-8a6cf6b52399-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459505 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-485k7\" (UniqueName: \"kubernetes.io/projected/cf325d20-c507-42cc-b96f-6e57ff55aa53-kube-api-access-485k7\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459557 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9b5326c-208f-40ba-b395-8a6cf6b52399-logs\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459574 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b5326c-208f-40ba-b395-8a6cf6b52399-config-data\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " 
pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459605 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-config-data\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459638 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf325d20-c507-42cc-b96f-6e57ff55aa53-logs\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.460022 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cf66c1e-2f67-4785-85e9-f0b06e578d29-logs\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.460434 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9b5326c-208f-40ba-b395-8a6cf6b52399-logs\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.466091 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.468114 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.479491 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b5326c-208f-40ba-b395-8a6cf6b52399-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.482785 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b5326c-208f-40ba-b395-8a6cf6b52399-config-data\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.484484 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-config-data\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.490836 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89hpq\" (UniqueName: \"kubernetes.io/projected/9cf66c1e-2f67-4785-85e9-f0b06e578d29-kube-api-access-89hpq\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.490915 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ldkj\" (UniqueName: \"kubernetes.io/projected/e9b5326c-208f-40ba-b395-8a6cf6b52399-kube-api-access-2ldkj\") pod 
\"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.527240 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.527405 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.564220 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf325d20-c507-42cc-b96f-6e57ff55aa53-logs\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.564279 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.564316 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.564414 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-config-data\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.564465 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-485k7\" (UniqueName: \"kubernetes.io/projected/cf325d20-c507-42cc-b96f-6e57ff55aa53-kube-api-access-485k7\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.566617 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf325d20-c507-42cc-b96f-6e57ff55aa53-logs\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.578827 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.579861 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-config-data\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.580971 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.586065 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-485k7\" (UniqueName: \"kubernetes.io/projected/cf325d20-c507-42cc-b96f-6e57ff55aa53-kube-api-access-485k7\") pod \"watcher-api-0\" (UID: 
\"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.614549 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.614611 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.621961 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.687585 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.720193 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.798182 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.883621 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-credential-keys\") pod \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.883951 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28x9f\" (UniqueName: \"kubernetes.io/projected/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-kube-api-access-28x9f\") pod \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.883980 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-config-data\") pod \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.884135 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-combined-ca-bundle\") pod \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.884173 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-scripts\") pod \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.884205 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-fernet-keys\") pod \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.891387 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-scripts" (OuterVolumeSpecName: "scripts") pod "f29ae8a1-b3cc-452c-ac99-b450ef3125d8" (UID: "f29ae8a1-b3cc-452c-ac99-b450ef3125d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.891921 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f29ae8a1-b3cc-452c-ac99-b450ef3125d8" (UID: "f29ae8a1-b3cc-452c-ac99-b450ef3125d8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.893301 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-kube-api-access-28x9f" (OuterVolumeSpecName: "kube-api-access-28x9f") pod "f29ae8a1-b3cc-452c-ac99-b450ef3125d8" (UID: "f29ae8a1-b3cc-452c-ac99-b450ef3125d8"). InnerVolumeSpecName "kube-api-access-28x9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.908592 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f29ae8a1-b3cc-452c-ac99-b450ef3125d8" (UID: "f29ae8a1-b3cc-452c-ac99-b450ef3125d8"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.923040 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-config-data" (OuterVolumeSpecName: "config-data") pod "f29ae8a1-b3cc-452c-ac99-b450ef3125d8" (UID: "f29ae8a1-b3cc-452c-ac99-b450ef3125d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.955613 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f29ae8a1-b3cc-452c-ac99-b450ef3125d8" (UID: "f29ae8a1-b3cc-452c-ac99-b450ef3125d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.985935 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.985962 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.985972 4942 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.985980 4942 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:29 crc 
kubenswrapper[4942]: I0218 19:35:29.985988 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28x9f\" (UniqueName: \"kubernetes.io/projected/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-kube-api-access-28x9f\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.985998 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.234038 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.244850 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tnqg7" event={"ID":"f29ae8a1-b3cc-452c-ac99-b450ef3125d8","Type":"ContainerDied","Data":"6ec2961c66c2e9651f7fa79f615b6ade4d1fc9deb7327e765b2f6fda45cc46c8"} Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.244899 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ec2961c66c2e9651f7fa79f615b6ade4d1fc9deb7327e765b2f6fda45cc46c8" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.244989 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.250550 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-56897c69bf-gkt87"] Feb 18 19:35:30 crc kubenswrapper[4942]: E0218 19:35:30.251027 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f29ae8a1-b3cc-452c-ac99-b450ef3125d8" containerName="keystone-bootstrap" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.251045 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f29ae8a1-b3cc-452c-ac99-b450ef3125d8" containerName="keystone-bootstrap" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.251279 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f29ae8a1-b3cc-452c-ac99-b450ef3125d8" containerName="keystone-bootstrap" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.252126 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.260487 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9szpl" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.261219 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.261625 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.263976 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.264200 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.265221 4942 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"keystone" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.285364 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-56897c69bf-gkt87"] Feb 18 19:35:30 crc kubenswrapper[4942]: W0218 19:35:30.293252 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cf66c1e_2f67_4785_85e9_f0b06e578d29.slice/crio-e7e10840e11edbe6af151474727a77162010126b060487f8547836dcab0bb348 WatchSource:0}: Error finding container e7e10840e11edbe6af151474727a77162010126b060487f8547836dcab0bb348: Status 404 returned error can't find the container with id e7e10840e11edbe6af151474727a77162010126b060487f8547836dcab0bb348 Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.302621 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-internal-tls-certs\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.302806 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-fernet-keys\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.302846 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-public-tls-certs\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.302937 
4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-credential-keys\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.302998 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-scripts\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.303023 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsrsr\" (UniqueName: \"kubernetes.io/projected/df16a440-84af-448f-a26c-9407514d1eda-kube-api-access-tsrsr\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.303048 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-config-data\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.303068 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-combined-ca-bundle\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 
19:35:30.388872 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.402629 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.404750 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-fernet-keys\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.404809 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-public-tls-certs\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.404849 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-credential-keys\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.404889 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-scripts\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.404911 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsrsr\" (UniqueName: 
\"kubernetes.io/projected/df16a440-84af-448f-a26c-9407514d1eda-kube-api-access-tsrsr\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.404935 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-combined-ca-bundle\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.404957 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-config-data\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.405054 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-internal-tls-certs\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.412517 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-scripts\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.413207 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-internal-tls-certs\") pod 
\"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.413364 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-fernet-keys\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.413489 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-public-tls-certs\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.414683 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-config-data\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.417871 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-combined-ca-bundle\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.422216 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-credential-keys\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 
crc kubenswrapper[4942]: I0218 19:35:30.438243 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsrsr\" (UniqueName: \"kubernetes.io/projected/df16a440-84af-448f-a26c-9407514d1eda-kube-api-access-tsrsr\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.653188 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:31 crc kubenswrapper[4942]: W0218 19:35:31.150976 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf16a440_84af_448f_a26c_9407514d1eda.slice/crio-de57c852e3f6a3a01d4d6f462a0802e52e93276dbf9a642383bd559fcb122c17 WatchSource:0}: Error finding container de57c852e3f6a3a01d4d6f462a0802e52e93276dbf9a642383bd559fcb122c17: Status 404 returned error can't find the container with id de57c852e3f6a3a01d4d6f462a0802e52e93276dbf9a642383bd559fcb122c17 Feb 18 19:35:31 crc kubenswrapper[4942]: I0218 19:35:31.170360 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-56897c69bf-gkt87"] Feb 18 19:35:31 crc kubenswrapper[4942]: I0218 19:35:31.258106 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"e9b5326c-208f-40ba-b395-8a6cf6b52399","Type":"ContainerStarted","Data":"1d5d88a2fcdce7e521ce89119f4619b9b739dd19a073e77bd14da576b76cc719"} Feb 18 19:35:31 crc kubenswrapper[4942]: I0218 19:35:31.261267 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56897c69bf-gkt87" event={"ID":"df16a440-84af-448f-a26c-9407514d1eda","Type":"ContainerStarted","Data":"de57c852e3f6a3a01d4d6f462a0802e52e93276dbf9a642383bd559fcb122c17"} Feb 18 19:35:31 crc kubenswrapper[4942]: I0218 19:35:31.272487 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-decision-engine-0" event={"ID":"9cf66c1e-2f67-4785-85e9-f0b06e578d29","Type":"ContainerStarted","Data":"e7e10840e11edbe6af151474727a77162010126b060487f8547836dcab0bb348"} Feb 18 19:35:31 crc kubenswrapper[4942]: I0218 19:35:31.279009 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cf325d20-c507-42cc-b96f-6e57ff55aa53","Type":"ContainerStarted","Data":"796b3cc6f87bbc8cea79f9f672a04a291cbb2f04782a6f0d27d4592a418cd947"} Feb 18 19:35:31 crc kubenswrapper[4942]: I0218 19:35:31.768935 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:31 crc kubenswrapper[4942]: I0218 19:35:31.838336 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"] Feb 18 19:35:31 crc kubenswrapper[4942]: I0218 19:35:31.838558 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" podUID="f354be6c-0a53-41b2-923d-60de99a6ed65" containerName="dnsmasq-dns" containerID="cri-o://cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34" gracePeriod=10 Feb 18 19:35:32 crc kubenswrapper[4942]: I0218 19:35:32.289455 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56897c69bf-gkt87" event={"ID":"df16a440-84af-448f-a26c-9407514d1eda","Type":"ContainerStarted","Data":"954b02989454405595ca18cbf334497a3b6b28da79771c0bf9e99c46d8b5cd6a"} Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.075800 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.184304 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs5jn\" (UniqueName: \"kubernetes.io/projected/f354be6c-0a53-41b2-923d-60de99a6ed65-kube-api-access-cs5jn\") pod \"f354be6c-0a53-41b2-923d-60de99a6ed65\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.184507 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-config\") pod \"f354be6c-0a53-41b2-923d-60de99a6ed65\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.184539 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-sb\") pod \"f354be6c-0a53-41b2-923d-60de99a6ed65\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.184591 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-swift-storage-0\") pod \"f354be6c-0a53-41b2-923d-60de99a6ed65\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.184616 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-svc\") pod \"f354be6c-0a53-41b2-923d-60de99a6ed65\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.184677 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-nb\") pod \"f354be6c-0a53-41b2-923d-60de99a6ed65\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.193264 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f354be6c-0a53-41b2-923d-60de99a6ed65-kube-api-access-cs5jn" (OuterVolumeSpecName: "kube-api-access-cs5jn") pod "f354be6c-0a53-41b2-923d-60de99a6ed65" (UID: "f354be6c-0a53-41b2-923d-60de99a6ed65"). InnerVolumeSpecName "kube-api-access-cs5jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.236884 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f354be6c-0a53-41b2-923d-60de99a6ed65" (UID: "f354be6c-0a53-41b2-923d-60de99a6ed65"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.249388 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f354be6c-0a53-41b2-923d-60de99a6ed65" (UID: "f354be6c-0a53-41b2-923d-60de99a6ed65"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.251115 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-config" (OuterVolumeSpecName: "config") pod "f354be6c-0a53-41b2-923d-60de99a6ed65" (UID: "f354be6c-0a53-41b2-923d-60de99a6ed65"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.256227 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f354be6c-0a53-41b2-923d-60de99a6ed65" (UID: "f354be6c-0a53-41b2-923d-60de99a6ed65"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.266197 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f354be6c-0a53-41b2-923d-60de99a6ed65" (UID: "f354be6c-0a53-41b2-923d-60de99a6ed65"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.287086 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.287247 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs5jn\" (UniqueName: \"kubernetes.io/projected/f354be6c-0a53-41b2-923d-60de99a6ed65-kube-api-access-cs5jn\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.287307 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.287360 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.287410 4942 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.287459 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.306523 4942 generic.go:334] "Generic (PLEG): container finished" podID="f354be6c-0a53-41b2-923d-60de99a6ed65" containerID="cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34" exitCode=0
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.306586 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.306599 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" event={"ID":"f354be6c-0a53-41b2-923d-60de99a6ed65","Type":"ContainerDied","Data":"cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34"}
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.306664 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" event={"ID":"f354be6c-0a53-41b2-923d-60de99a6ed65","Type":"ContainerDied","Data":"d0d48456629f18d0d25f803c1de4ee3c6cb53d9140b37084a9b2aa9d6750f014"}
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.306689 4942 scope.go:117] "RemoveContainer" containerID="cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34"
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.335476 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-56897c69bf-gkt87" podStartSLOduration=3.335451583 podStartE2EDuration="3.335451583s" podCreationTimestamp="2026-02-18 19:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:33.32609721 +0000 UTC m=+1093.031029885" watchObservedRunningTime="2026-02-18 19:35:33.335451583 +0000 UTC m=+1093.040384248"
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.374955 4942 scope.go:117] "RemoveContainer" containerID="64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d"
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.385984 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"]
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.394678 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"]
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.407451 4942 scope.go:117] "RemoveContainer" containerID="cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34"
Feb 18 19:35:33 crc kubenswrapper[4942]: E0218 19:35:33.407868 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34\": container with ID starting with cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34 not found: ID does not exist" containerID="cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34"
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.407895 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34"} err="failed to get container status \"cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34\": rpc error: code = NotFound desc = could not find container \"cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34\": container with ID starting with cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34 not found: ID does not exist"
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.407917 4942 scope.go:117] "RemoveContainer" containerID="64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d"
Feb 18 19:35:33 crc kubenswrapper[4942]: E0218 19:35:33.408229 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d\": container with ID starting with 64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d not found: ID does not exist" containerID="64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d"
Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.408284 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d"} err="failed to get container status \"64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d\": rpc error: code = NotFound desc = could not find container \"64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d\": container with ID starting with 64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d not found: ID does not exist"
Feb 18 19:35:34 crc kubenswrapper[4942]: I0218 19:35:34.327379 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5cd0efdc-b208-4270-9c23-33e01f7298be","Type":"ContainerStarted","Data":"7f7ecb8106c4011dd2affe0db157078ed440c3dc9a5f336a7fd4922172637f01"}
Feb 18 19:35:34 crc kubenswrapper[4942]: I0218 19:35:34.331508 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cf325d20-c507-42cc-b96f-6e57ff55aa53","Type":"ContainerStarted","Data":"aa132dbcbfbe636d2466bf98fe3a945bcf6b8f37a1c6b00263bbaa8b8d41b75b"}
Feb 18 19:35:34 crc kubenswrapper[4942]: I0218 19:35:34.354181 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.354158484 podStartE2EDuration="8.354158484s" podCreationTimestamp="2026-02-18 19:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:34.344667728 +0000 UTC m=+1094.049600393" watchObservedRunningTime="2026-02-18 19:35:34.354158484 +0000 UTC m=+1094.059091149"
Feb 18 19:35:35 crc kubenswrapper[4942]: I0218 19:35:35.048022 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f354be6c-0a53-41b2-923d-60de99a6ed65" path="/var/lib/kubelet/pods/f354be6c-0a53-41b2-923d-60de99a6ed65/volumes"
Feb 18 19:35:36 crc kubenswrapper[4942]: I0218 19:35:36.349472 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 18 19:35:36 crc kubenswrapper[4942]: I0218 19:35:36.349824 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 18 19:35:36 crc kubenswrapper[4942]: I0218 19:35:36.383200 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 18 19:35:36 crc kubenswrapper[4942]: I0218 19:35:36.389051 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 18 19:35:36 crc kubenswrapper[4942]: I0218 19:35:36.813008 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 18 19:35:36 crc kubenswrapper[4942]: I0218 19:35:36.813313 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 18 19:35:36 crc kubenswrapper[4942]: I0218 19:35:36.858336 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 18 19:35:36 crc kubenswrapper[4942]: I0218 19:35:36.871977 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 18 19:35:37 crc kubenswrapper[4942]: I0218 19:35:37.357309 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 18 19:35:37 crc kubenswrapper[4942]: I0218 19:35:37.357605 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 18 19:35:37 crc kubenswrapper[4942]: I0218 19:35:37.357625 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 18 19:35:37 crc kubenswrapper[4942]: I0218 19:35:37.357635 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.382801 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cf325d20-c507-42cc-b96f-6e57ff55aa53","Type":"ContainerStarted","Data":"a5770f508e1c40bf4ef682bff10bac69873d582c1a0625dbd01c701b14695817"}
Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.383434 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.386106 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.172:9322/\": dial tcp 10.217.0.172:9322: connect: connection refused"
Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.413574 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=10.413555809 podStartE2EDuration="10.413555809s" podCreationTimestamp="2026-02-18 19:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:39.412179093 +0000 UTC m=+1099.117111788" watchObservedRunningTime="2026-02-18 19:35:39.413555809 +0000 UTC m=+1099.118488474"
Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.529976 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-54d64cf59b-xp7rk" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.158:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.158:8443: connect: connection refused"
Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.617753 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7b6b6597b8-m8ngr" podUID="55d24776-2d1c-413a-8ba1-06cdadf63d04" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused"
Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.654587 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.658563 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.658657 4942 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.720477 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.720522 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0"
Feb 18 19:35:40 crc kubenswrapper[4942]: I0218 19:35:40.033998 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 18 19:35:40 crc kubenswrapper[4942]: I0218 19:35:40.409316 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"e9b5326c-208f-40ba-b395-8a6cf6b52399","Type":"ContainerStarted","Data":"a6d830c5dfd1037f768c2a7f3ca288e31e574b8af1c450ae54908c2a7cf2a5bf"}
Feb 18 19:35:40 crc kubenswrapper[4942]: I0218 19:35:40.413963 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h2kjs" event={"ID":"8aeac097-ba93-4859-a14f-839ae1421e28","Type":"ContainerStarted","Data":"4d566d8d0c1f2395dae51975108188a50f273b881992f487f3b84531a9f2e9f1"}
Feb 18 19:35:40 crc kubenswrapper[4942]: I0218 19:35:40.417039 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerStarted","Data":"cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3"}
Feb 18 19:35:40 crc kubenswrapper[4942]: I0218 19:35:40.427500 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.038048685 podStartE2EDuration="11.427482296s" podCreationTimestamp="2026-02-18 19:35:29 +0000 UTC" firstStartedPulling="2026-02-18 19:35:30.391541519 +0000 UTC m=+1090.096474184" lastFinishedPulling="2026-02-18 19:35:39.78097513 +0000 UTC m=+1099.485907795" observedRunningTime="2026-02-18 19:35:40.42688576 +0000 UTC m=+1100.131818425" watchObservedRunningTime="2026-02-18 19:35:40.427482296 +0000 UTC m=+1100.132414961"
Feb 18 19:35:40 crc kubenswrapper[4942]: I0218 19:35:40.448455 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-h2kjs" podStartSLOduration=2.905393158 podStartE2EDuration="51.44843879s" podCreationTimestamp="2026-02-18 19:34:49 +0000 UTC" firstStartedPulling="2026-02-18 19:34:51.219193791 +0000 UTC m=+1050.924126456" lastFinishedPulling="2026-02-18 19:35:39.762239423 +0000 UTC m=+1099.467172088" observedRunningTime="2026-02-18 19:35:40.444676213 +0000 UTC m=+1100.149608888" watchObservedRunningTime="2026-02-18 19:35:40.44843879 +0000 UTC m=+1100.153371455"
Feb 18 19:35:40 crc kubenswrapper[4942]: I0218 19:35:40.762027 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/watcher-api-0" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.172:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 19:35:41 crc kubenswrapper[4942]: I0218 19:35:41.270396 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 18 19:35:41 crc kubenswrapper[4942]: I0218 19:35:41.428883 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qvzh5" event={"ID":"8db7f68b-a733-44fc-90b9-a1dd489fb42d","Type":"ContainerStarted","Data":"e0015f6cb0ed0e4e677017a14f5fcb4378f27372b8c41b1fdca89664675f56a0"}
Feb 18 19:35:41 crc kubenswrapper[4942]: I0218 19:35:41.431266 4942 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 19:35:41 crc kubenswrapper[4942]: I0218 19:35:41.432047 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9cf66c1e-2f67-4785-85e9-f0b06e578d29","Type":"ContainerStarted","Data":"565df78e0898331235735ffa8948cdc3dea82d61dc2d3519faa61301dd4f6ffd"}
Feb 18 19:35:41 crc kubenswrapper[4942]: I0218 19:35:41.450912 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qvzh5" podStartSLOduration=4.508829439 podStartE2EDuration="52.450894419s" podCreationTimestamp="2026-02-18 19:34:49 +0000 UTC" firstStartedPulling="2026-02-18 19:34:51.311627554 +0000 UTC m=+1051.016560219" lastFinishedPulling="2026-02-18 19:35:39.253692534 +0000 UTC m=+1098.958625199" observedRunningTime="2026-02-18 19:35:41.444885923 +0000 UTC m=+1101.149818588" watchObservedRunningTime="2026-02-18 19:35:41.450894419 +0000 UTC m=+1101.155827084"
Feb 18 19:35:41 crc kubenswrapper[4942]: I0218 19:35:41.464978 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=1.7960282520000002 podStartE2EDuration="12.464960934s" podCreationTimestamp="2026-02-18 19:35:29 +0000 UTC" firstStartedPulling="2026-02-18 19:35:30.307273108 +0000 UTC m=+1090.012205773" lastFinishedPulling="2026-02-18 19:35:40.97620578 +0000 UTC m=+1100.681138455" observedRunningTime="2026-02-18 19:35:41.463121867 +0000 UTC m=+1101.168054542" watchObservedRunningTime="2026-02-18 19:35:41.464960934 +0000 UTC m=+1101.169893599"
Feb 18 19:35:42 crc kubenswrapper[4942]: I0218 19:35:42.530751 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Feb 18 19:35:44 crc kubenswrapper[4942]: I0218 19:35:44.687939 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 18 19:35:44 crc kubenswrapper[4942]: I0218 19:35:44.803895 4942 generic.go:334] "Generic (PLEG): container finished" podID="8aeac097-ba93-4859-a14f-839ae1421e28" containerID="4d566d8d0c1f2395dae51975108188a50f273b881992f487f3b84531a9f2e9f1" exitCode=0
Feb 18 19:35:44 crc kubenswrapper[4942]: I0218 19:35:44.804021 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h2kjs" event={"ID":"8aeac097-ba93-4859-a14f-839ae1421e28","Type":"ContainerDied","Data":"4d566d8d0c1f2395dae51975108188a50f273b881992f487f3b84531a9f2e9f1"}
Feb 18 19:35:46 crc kubenswrapper[4942]: I0218 19:35:46.828055 4942 generic.go:334] "Generic (PLEG): container finished" podID="8db7f68b-a733-44fc-90b9-a1dd489fb42d" containerID="e0015f6cb0ed0e4e677017a14f5fcb4378f27372b8c41b1fdca89664675f56a0" exitCode=0
Feb 18 19:35:46 crc kubenswrapper[4942]: I0218 19:35:46.828147 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qvzh5" event={"ID":"8db7f68b-a733-44fc-90b9-a1dd489fb42d","Type":"ContainerDied","Data":"e0015f6cb0ed0e4e677017a14f5fcb4378f27372b8c41b1fdca89664675f56a0"}
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.308994 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h2kjs"
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.313611 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350170 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-db-sync-config-data\") pod \"8aeac097-ba93-4859-a14f-839ae1421e28\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") "
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350214 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-combined-ca-bundle\") pod \"8aeac097-ba93-4859-a14f-839ae1421e28\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") "
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350238 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdzfp\" (UniqueName: \"kubernetes.io/projected/8aeac097-ba93-4859-a14f-839ae1421e28-kube-api-access-kdzfp\") pod \"8aeac097-ba93-4859-a14f-839ae1421e28\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") "
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350272 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-combined-ca-bundle\") pod \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") "
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350293 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z75l6\" (UniqueName: \"kubernetes.io/projected/8db7f68b-a733-44fc-90b9-a1dd489fb42d-kube-api-access-z75l6\") pod \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") "
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350317 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-scripts\") pod \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") "
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350334 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8db7f68b-a733-44fc-90b9-a1dd489fb42d-etc-machine-id\") pod \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") "
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350399 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-db-sync-config-data\") pod \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") "
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350439 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-config-data\") pod \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") "
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.353021 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8db7f68b-a733-44fc-90b9-a1dd489fb42d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8db7f68b-a733-44fc-90b9-a1dd489fb42d" (UID: "8db7f68b-a733-44fc-90b9-a1dd489fb42d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.363943 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-scripts" (OuterVolumeSpecName: "scripts") pod "8db7f68b-a733-44fc-90b9-a1dd489fb42d" (UID: "8db7f68b-a733-44fc-90b9-a1dd489fb42d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.364098 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aeac097-ba93-4859-a14f-839ae1421e28-kube-api-access-kdzfp" (OuterVolumeSpecName: "kube-api-access-kdzfp") pod "8aeac097-ba93-4859-a14f-839ae1421e28" (UID: "8aeac097-ba93-4859-a14f-839ae1421e28"). InnerVolumeSpecName "kube-api-access-kdzfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.367991 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8db7f68b-a733-44fc-90b9-a1dd489fb42d" (UID: "8db7f68b-a733-44fc-90b9-a1dd489fb42d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.378022 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8db7f68b-a733-44fc-90b9-a1dd489fb42d-kube-api-access-z75l6" (OuterVolumeSpecName: "kube-api-access-z75l6") pod "8db7f68b-a733-44fc-90b9-a1dd489fb42d" (UID: "8db7f68b-a733-44fc-90b9-a1dd489fb42d"). InnerVolumeSpecName "kube-api-access-z75l6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.381749 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8aeac097-ba93-4859-a14f-839ae1421e28" (UID: "8aeac097-ba93-4859-a14f-839ae1421e28"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.395014 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8db7f68b-a733-44fc-90b9-a1dd489fb42d" (UID: "8db7f68b-a733-44fc-90b9-a1dd489fb42d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.396952 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8aeac097-ba93-4859-a14f-839ae1421e28" (UID: "8aeac097-ba93-4859-a14f-839ae1421e28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.417916 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-config-data" (OuterVolumeSpecName: "config-data") pod "8db7f68b-a733-44fc-90b9-a1dd489fb42d" (UID: "8db7f68b-a733-44fc-90b9-a1dd489fb42d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452636 4942 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452663 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452673 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdzfp\" (UniqueName: \"kubernetes.io/projected/8aeac097-ba93-4859-a14f-839ae1421e28-kube-api-access-kdzfp\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452684 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452692 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z75l6\" (UniqueName: \"kubernetes.io/projected/8db7f68b-a733-44fc-90b9-a1dd489fb42d-kube-api-access-z75l6\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452700 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452710 4942 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8db7f68b-a733-44fc-90b9-a1dd489fb42d-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452718 4942 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452726 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.622992 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.659312 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0"
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.688023 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.714266 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0"
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.736199 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0"
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.741242 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.867359 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h2kjs"
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.867405 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h2kjs" event={"ID":"8aeac097-ba93-4859-a14f-839ae1421e28","Type":"ContainerDied","Data":"e12d1b9fecda9ebe7bb6c836765d71cc803f359fe9c297ce1d8263fb74f3fe1c"}
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.868221 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e12d1b9fecda9ebe7bb6c836765d71cc803f359fe9c297ce1d8263fb74f3fe1c"
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.869663 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qvzh5" event={"ID":"8db7f68b-a733-44fc-90b9-a1dd489fb42d","Type":"ContainerDied","Data":"e6bd17d6977af834a72bbf74bee36179b26553390413854446805a67a2e12afa"}
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.869700 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6bd17d6977af834a72bbf74bee36179b26553390413854446805a67a2e12afa"
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.869891 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qvzh5"
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.872272 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerStarted","Data":"3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a"}
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.872460 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="ceilometer-central-agent" containerID="cri-o://36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407" gracePeriod=30
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.872691 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.872969 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="proxy-httpd" containerID="cri-o://3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a" gracePeriod=30
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.873064 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="sg-core" containerID="cri-o://cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3" gracePeriod=30
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.873089 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.873117 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="ceilometer-notification-agent" containerID="cri-o://e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd" gracePeriod=30
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.921634 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.843364486 podStartE2EDuration="59.921610388s" podCreationTimestamp="2026-02-18 19:34:50 +0000 UTC" firstStartedPulling="2026-02-18 19:34:51.257100617 +0000 UTC m=+1050.962033282" lastFinishedPulling="2026-02-18 19:35:49.335346519 +0000 UTC m=+1109.040279184" observedRunningTime="2026-02-18 19:35:49.909044822 +0000 UTC m=+1109.613977497" watchObservedRunningTime="2026-02-18 19:35:49.921610388 +0000 UTC m=+1109.626543053"
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.959212 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0"
Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.964953 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0"
Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.555183 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-ffd58675f-h7jk6"]
Feb 18 19:35:50 crc kubenswrapper[4942]: E0218 19:35:50.555618 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8db7f68b-a733-44fc-90b9-a1dd489fb42d" containerName="cinder-db-sync"
Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.555629 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="8db7f68b-a733-44fc-90b9-a1dd489fb42d" containerName="cinder-db-sync"
Feb 18 19:35:50 crc kubenswrapper[4942]: E0218 19:35:50.555661 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aeac097-ba93-4859-a14f-839ae1421e28" containerName="barbican-db-sync"
Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.555667 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aeac097-ba93-4859-a14f-839ae1421e28" containerName="barbican-db-sync"
Feb 18 19:35:50 crc kubenswrapper[4942]: E0218 19:35:50.555675 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f354be6c-0a53-41b2-923d-60de99a6ed65" containerName="init"
Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.555681 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f354be6c-0a53-41b2-923d-60de99a6ed65" containerName="init"
Feb 18 19:35:50 crc kubenswrapper[4942]: E0218 19:35:50.555690 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f354be6c-0a53-41b2-923d-60de99a6ed65" containerName="dnsmasq-dns"
Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.555695 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f354be6c-0a53-41b2-923d-60de99a6ed65" containerName="dnsmasq-dns"
Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.555891 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aeac097-ba93-4859-a14f-839ae1421e28" containerName="barbican-db-sync"
Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.555903 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="8db7f68b-a733-44fc-90b9-a1dd489fb42d" containerName="cinder-db-sync"
Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.555917 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f354be6c-0a53-41b2-923d-60de99a6ed65" containerName="dnsmasq-dns"
Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.556824 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.569440 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.572503 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qg5fj" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.577394 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5987dd846-f7dd9"] Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.578316 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.579161 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.583109 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.606429 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5987dd846-f7dd9"] Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.623822 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-ffd58675f-h7jk6"] Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680431 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwwb2\" (UniqueName: \"kubernetes.io/projected/0e207482-f349-415e-86d3-800b0caf9a78-kube-api-access-bwwb2\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680484 4942 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-config-data\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680506 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db966aef-0b18-400a-b3e8-49487308bf05-logs\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680563 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-config-data\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680577 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-combined-ca-bundle\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680596 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-config-data-custom\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: 
\"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680615 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e207482-f349-415e-86d3-800b0caf9a78-logs\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680634 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6xvm\" (UniqueName: \"kubernetes.io/projected/db966aef-0b18-400a-b3e8-49487308bf05-kube-api-access-n6xvm\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680669 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-combined-ca-bundle\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680694 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-config-data-custom\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.747227 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b895b5785-t2j2r"] Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 
19:35:50.748687 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.775814 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-t2j2r"] Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.783825 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-config-data-custom\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.783917 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwwb2\" (UniqueName: \"kubernetes.io/projected/0e207482-f349-415e-86d3-800b0caf9a78-kube-api-access-bwwb2\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.783947 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-config-data\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.783966 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db966aef-0b18-400a-b3e8-49487308bf05-logs\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.784020 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-config-data\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.784037 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-combined-ca-bundle\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.784056 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-config-data-custom\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.784072 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e207482-f349-415e-86d3-800b0caf9a78-logs\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.784091 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6xvm\" (UniqueName: \"kubernetes.io/projected/db966aef-0b18-400a-b3e8-49487308bf05-kube-api-access-n6xvm\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc 
kubenswrapper[4942]: I0218 19:35:50.784123 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-combined-ca-bundle\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.792017 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db966aef-0b18-400a-b3e8-49487308bf05-logs\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.797534 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e207482-f349-415e-86d3-800b0caf9a78-logs\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.797672 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-combined-ca-bundle\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.800650 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-config-data\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.800866 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-config-data-custom\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.807396 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-config-data-custom\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.818970 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.819873 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-config-data\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.820374 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.823315 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-combined-ca-bundle\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.828322 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwwb2\" (UniqueName: \"kubernetes.io/projected/0e207482-f349-415e-86d3-800b0caf9a78-kube-api-access-bwwb2\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.856193 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rhdz8" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.856390 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.856492 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.856610 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.857971 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.872403 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6xvm\" (UniqueName: \"kubernetes.io/projected/db966aef-0b18-400a-b3e8-49487308bf05-kube-api-access-n6xvm\") pod 
\"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.875960 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.887393 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.887485 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9h6h\" (UniqueName: \"kubernetes.io/projected/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-kube-api-access-v9h6h\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.887523 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-svc\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.887556 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc 
kubenswrapper[4942]: I0218 19:35:50.887579 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.887600 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-config\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.895924 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.965366 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-t2j2r"] Feb 18 19:35:50 crc kubenswrapper[4942]: E0218 19:35:50.966003 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-v9h6h ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-b895b5785-t2j2r" podUID="25b5e8eb-ac39-4f81-9601-0b2cc0a54a13" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.971097 4942 generic.go:334] "Generic (PLEG): container finished" podID="e4517368-322e-4467-b31a-45b487e1035b" containerID="3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a" exitCode=0 Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.971128 4942 generic.go:334] "Generic (PLEG): container finished" podID="e4517368-322e-4467-b31a-45b487e1035b" 
containerID="cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3" exitCode=2 Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.971136 4942 generic.go:334] "Generic (PLEG): container finished" podID="e4517368-322e-4467-b31a-45b487e1035b" containerID="36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407" exitCode=0 Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.971705 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerDied","Data":"3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a"} Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.971735 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerDied","Data":"cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3"} Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.971746 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerDied","Data":"36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407"} Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992158 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-config\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992230 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-scripts\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:50 crc 
kubenswrapper[4942]: I0218 19:35:50.992256 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb8kr\" (UniqueName: \"kubernetes.io/projected/17399208-02d7-46c9-b5ea-b01563e8baf1-kube-api-access-nb8kr\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992279 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992297 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17399208-02d7-46c9-b5ea-b01563e8baf1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992354 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992377 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9h6h\" (UniqueName: \"kubernetes.io/projected/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-kube-api-access-v9h6h\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992410 
4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-svc\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992426 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992447 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992470 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992492 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.993279 4942 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.993811 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-config\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.994337 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.007934 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.009271 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-svc\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.037379 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9h6h\" (UniqueName: 
\"kubernetes.io/projected/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-kube-api-access-v9h6h\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.094480 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-scripts\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.094567 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb8kr\" (UniqueName: \"kubernetes.io/projected/17399208-02d7-46c9-b5ea-b01563e8baf1-kube-api-access-nb8kr\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.094593 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17399208-02d7-46c9-b5ea-b01563e8baf1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.094673 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.094737 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.094778 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.098513 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.098978 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lrqxl"] Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.098993 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17399208-02d7-46c9-b5ea-b01563e8baf1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.100234 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lrqxl"] Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.100306 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.107677 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-scripts\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.112874 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.118871 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.120799 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-9d65dd5d-c4zgj"] Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.133237 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.139938 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.163772 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb8kr\" (UniqueName: \"kubernetes.io/projected/17399208-02d7-46c9-b5ea-b01563e8baf1-kube-api-access-nb8kr\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.181803 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9d65dd5d-c4zgj"] Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.198580 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data-custom\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.198857 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92ffg\" (UniqueName: \"kubernetes.io/projected/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-kube-api-access-92ffg\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.198883 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.198937 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-logs\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.198957 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.199018 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-combined-ca-bundle\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.200011 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.200037 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: 
\"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.200078 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.200094 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-config\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.200112 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tb7v\" (UniqueName: \"kubernetes.io/projected/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-kube-api-access-8tb7v\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.228057 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.230630 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.242729 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.280021 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.306211 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.307124 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.307161 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.307189 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data-custom\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.307211 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.307233 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-config\") pod 
\"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.307250 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tb7v\" (UniqueName: \"kubernetes.io/projected/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-kube-api-access-8tb7v\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.308831 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82d59804-3d83-4594-855b-f08b93e146a4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.308933 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data-custom\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.308994 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-scripts\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309036 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdj52\" (UniqueName: \"kubernetes.io/projected/82d59804-3d83-4594-855b-f08b93e146a4-kube-api-access-wdj52\") pod \"cinder-api-0\" (UID: 
\"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309159 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309185 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309277 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92ffg\" (UniqueName: \"kubernetes.io/projected/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-kube-api-access-92ffg\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309325 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309439 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-logs\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc 
kubenswrapper[4942]: I0218 19:35:51.309470 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309559 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82d59804-3d83-4594-855b-f08b93e146a4-logs\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309593 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309601 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-combined-ca-bundle\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.310282 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-config\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.308838 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.310316 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.311146 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.312163 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-logs\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.318730 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data-custom\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.326366 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data\") pod 
\"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.330593 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tb7v\" (UniqueName: \"kubernetes.io/projected/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-kube-api-access-8tb7v\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.332194 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92ffg\" (UniqueName: \"kubernetes.io/projected/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-kube-api-access-92ffg\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.334417 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-combined-ca-bundle\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.415621 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.415976 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " 
pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.416072 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82d59804-3d83-4594-855b-f08b93e146a4-logs\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.416137 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data-custom\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.416181 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82d59804-3d83-4594-855b-f08b93e146a4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.416294 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-scripts\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.416324 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdj52\" (UniqueName: \"kubernetes.io/projected/82d59804-3d83-4594-855b-f08b93e146a4-kube-api-access-wdj52\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.418989 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/82d59804-3d83-4594-855b-f08b93e146a4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.424124 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82d59804-3d83-4594-855b-f08b93e146a4-logs\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.431147 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-scripts\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.431211 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data-custom\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.431370 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.431775 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.438252 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wdj52\" (UniqueName: \"kubernetes.io/projected/82d59804-3d83-4594-855b-f08b93e146a4-kube-api-access-wdj52\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.451241 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.480245 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.568350 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.694568 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-ffd58675f-h7jk6"] Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.802428 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.810492 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5987dd846-f7dd9"] Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.928745 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.987056 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" event={"ID":"db966aef-0b18-400a-b3e8-49487308bf05","Type":"ContainerStarted","Data":"475c72870f27467190ac06d0fda886059a940e78ec7c42749f579ef22ae8d000"} Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.991935 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-ffd58675f-h7jk6" 
event={"ID":"0e207482-f349-415e-86d3-800b0caf9a78","Type":"ContainerStarted","Data":"bf40e2824d87c3428f6357a33e3b4dc50ec72ed94982cd5e48acd2a1a672ca8d"} Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.995942 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"17399208-02d7-46c9-b5ea-b01563e8baf1","Type":"ContainerStarted","Data":"0e634a244135542433fea3600e46692e4afcef32f4f22d2c4274a7c75eb4af2b"} Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.996044 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.012331 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.028487 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6b8c9f8ffc-qtdr8"] Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.028774 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6b8c9f8ffc-qtdr8" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-api" containerID="cri-o://5406c6b90781279268f75608c064a21d3a65e4eb4c8a4c7e959d4465b49185b9" gracePeriod=30 Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.028942 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6b8c9f8ffc-qtdr8" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-httpd" containerID="cri-o://531ee7816fd7353cd71c0f54232b96ad0dd37eddd3c96b8ac1f0e58197be9795" gracePeriod=30 Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.044283 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6b8c9f8ffc-qtdr8" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.166:9696/\": 
read tcp 10.217.0.2:58106->10.217.0.166:9696: read: connection reset by peer" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.056010 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lrqxl"] Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.068507 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9d65dd5d-c4zgj"] Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.083414 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-555cb4cc6f-xh69m"] Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.087082 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.092832 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-555cb4cc6f-xh69m"] Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.132441 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-config\") pod \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.132494 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-sb\") pod \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.132533 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-swift-storage-0\") pod \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " Feb 18 19:35:52 crc kubenswrapper[4942]: 
I0218 19:35:52.132590 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-nb\") pod \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.132639 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9h6h\" (UniqueName: \"kubernetes.io/projected/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-kube-api-access-v9h6h\") pod \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.132671 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-svc\") pod \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.133726 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13" (UID: "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.134205 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-config" (OuterVolumeSpecName: "config") pod "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13" (UID: "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.134217 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13" (UID: "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.134613 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13" (UID: "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.134753 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13" (UID: "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.138043 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-kube-api-access-v9h6h" (OuterVolumeSpecName: "kube-api-access-v9h6h") pod "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13" (UID: "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13"). InnerVolumeSpecName "kube-api-access-v9h6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.194915 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.234746 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-combined-ca-bundle\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.234927 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-internal-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.234968 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-config\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.234993 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-ovndb-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235028 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-httpd-config\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235077 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb48j\" (UniqueName: \"kubernetes.io/projected/fb5df9b1-974d-4c39-9278-b79355109acb-kube-api-access-lb48j\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235107 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-public-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235212 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235225 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235238 4942 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235250 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235262 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9h6h\" (UniqueName: \"kubernetes.io/projected/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-kube-api-access-v9h6h\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235272 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.339957 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-config\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.340004 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-ovndb-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.340047 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-httpd-config\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.340108 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb48j\" (UniqueName: 
\"kubernetes.io/projected/fb5df9b1-974d-4c39-9278-b79355109acb-kube-api-access-lb48j\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.340143 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-public-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.340246 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-combined-ca-bundle\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.340331 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-internal-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.345183 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-combined-ca-bundle\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.345850 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-config\") pod 
\"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.350379 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-internal-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.353367 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-public-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.357059 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-ovndb-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.365838 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-httpd-config\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.378591 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb48j\" (UniqueName: \"kubernetes.io/projected/fb5df9b1-974d-4c39-9278-b79355109acb-kube-api-access-lb48j\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m" 
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.405796 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.677917 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.022009 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-555cb4cc6f-xh69m"] Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.078462 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82d59804-3d83-4594-855b-f08b93e146a4","Type":"ContainerStarted","Data":"3311d8cddbe87c83a85f89c5d8660e6aa1ae4c9bc3dfb708ff87ff00d9bd9163"} Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.084729 4942 generic.go:334] "Generic (PLEG): container finished" podID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" containerID="dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd" exitCode=0 Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.085714 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" event={"ID":"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0","Type":"ContainerDied","Data":"dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd"} Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.085752 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" event={"ID":"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0","Type":"ContainerStarted","Data":"e8c28553c41794bb048bb9a7187c1a1ab7f1585b41b9526ea5b0ab594f5efa4f"} Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.095230 4942 generic.go:334] "Generic (PLEG): container finished" podID="921d1a28-ead8-42a6-933c-38a339741884" containerID="531ee7816fd7353cd71c0f54232b96ad0dd37eddd3c96b8ac1f0e58197be9795" exitCode=0 Feb 18 19:35:53 crc kubenswrapper[4942]: 
I0218 19:35:53.095318 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8c9f8ffc-qtdr8" event={"ID":"921d1a28-ead8-42a6-933c-38a339741884","Type":"ContainerDied","Data":"531ee7816fd7353cd71c0f54232b96ad0dd37eddd3c96b8ac1f0e58197be9795"} Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.098791 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.100466 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.101294 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9d65dd5d-c4zgj" event={"ID":"e4cc3ba2-abea-4fa2-9272-65ac8721c87d","Type":"ContainerStarted","Data":"ecbd025dc0394b9034d21e03a44147434ce1904d40c5ab1c61c7e88c90aadd1e"} Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.101320 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.101332 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9d65dd5d-c4zgj" event={"ID":"e4cc3ba2-abea-4fa2-9272-65ac8721c87d","Type":"ContainerStarted","Data":"2b088a9056603d3d58e3baff59e58248fc06291c4ff662a1d08a6fc2664c9a1c"} Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.101342 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9d65dd5d-c4zgj" event={"ID":"e4cc3ba2-abea-4fa2-9272-65ac8721c87d","Type":"ContainerStarted","Data":"508f30ffb0657c1e039b8b11a78534bab62a7a31f3ad591584cdc61bbaa73274"} Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.101358 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.144345 4942 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-9d65dd5d-c4zgj" podStartSLOduration=2.14430182 podStartE2EDuration="2.14430182s" podCreationTimestamp="2026-02-18 19:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:53.131638381 +0000 UTC m=+1112.836571046" watchObservedRunningTime="2026-02-18 19:35:53.14430182 +0000 UTC m=+1112.849234485" Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.201004 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-t2j2r"] Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.230856 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-t2j2r"] Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.124281 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-555cb4cc6f-xh69m" event={"ID":"fb5df9b1-974d-4c39-9278-b79355109acb","Type":"ContainerStarted","Data":"df10b2e774d393aaf9e8c0074048e695a19eea0844a790d5e500020634603cb1"} Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.128335 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.141753 4942 generic.go:334] "Generic (PLEG): container finished" podID="e4517368-322e-4467-b31a-45b487e1035b" containerID="e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd" exitCode=0 Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.141827 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerDied","Data":"e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd"} Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.141853 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerDied","Data":"6813065f5777b4af8dd89f8c25333785bb85a450b21a1a7ab93d214ca1b8049c"} Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.141871 4942 scope.go:117] "RemoveContainer" containerID="3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.159271 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82d59804-3d83-4594-855b-f08b93e146a4","Type":"ContainerStarted","Data":"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff"} Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.204314 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-log-httpd\") pod \"e4517368-322e-4467-b31a-45b487e1035b\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.204398 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-scripts\") pod 
\"e4517368-322e-4467-b31a-45b487e1035b\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.204450 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-config-data\") pod \"e4517368-322e-4467-b31a-45b487e1035b\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.204527 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-run-httpd\") pod \"e4517368-322e-4467-b31a-45b487e1035b\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.204589 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-sg-core-conf-yaml\") pod \"e4517368-322e-4467-b31a-45b487e1035b\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.204669 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68tzs\" (UniqueName: \"kubernetes.io/projected/e4517368-322e-4467-b31a-45b487e1035b-kube-api-access-68tzs\") pod \"e4517368-322e-4467-b31a-45b487e1035b\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.204693 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-combined-ca-bundle\") pod \"e4517368-322e-4467-b31a-45b487e1035b\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.204715 4942 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/neutron-6b8c9f8ffc-qtdr8" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.166:9696/\": dial tcp 10.217.0.166:9696: connect: connection refused" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.224354 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e4517368-322e-4467-b31a-45b487e1035b" (UID: "e4517368-322e-4467-b31a-45b487e1035b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.224625 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e4517368-322e-4467-b31a-45b487e1035b" (UID: "e4517368-322e-4467-b31a-45b487e1035b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.226352 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.230918 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4517368-322e-4467-b31a-45b487e1035b-kube-api-access-68tzs" (OuterVolumeSpecName: "kube-api-access-68tzs") pod "e4517368-322e-4467-b31a-45b487e1035b" (UID: "e4517368-322e-4467-b31a-45b487e1035b"). InnerVolumeSpecName "kube-api-access-68tzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.277230 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-scripts" (OuterVolumeSpecName: "scripts") pod "e4517368-322e-4467-b31a-45b487e1035b" (UID: "e4517368-322e-4467-b31a-45b487e1035b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.315598 4942 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.315627 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68tzs\" (UniqueName: \"kubernetes.io/projected/e4517368-322e-4467-b31a-45b487e1035b-kube-api-access-68tzs\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.315635 4942 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.315645 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.343022 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e4517368-322e-4467-b31a-45b487e1035b" (UID: "e4517368-322e-4467-b31a-45b487e1035b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.394342 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4517368-322e-4467-b31a-45b487e1035b" (UID: "e4517368-322e-4467-b31a-45b487e1035b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.417947 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.417979 4942 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.462359 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-config-data" (OuterVolumeSpecName: "config-data") pod "e4517368-322e-4467-b31a-45b487e1035b" (UID: "e4517368-322e-4467-b31a-45b487e1035b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.519530 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.519666 4942 scope.go:117] "RemoveContainer" containerID="cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.546287 4942 scope.go:117] "RemoveContainer" containerID="e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.574462 4942 scope.go:117] "RemoveContainer" containerID="36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.595502 4942 scope.go:117] "RemoveContainer" containerID="3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a" Feb 18 19:35:54 crc kubenswrapper[4942]: E0218 19:35:54.595984 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a\": container with ID starting with 3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a not found: ID does not exist" containerID="3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.596035 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a"} err="failed to get container status \"3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a\": rpc error: code = NotFound desc = could not find container \"3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a\": container with ID starting with 
3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a not found: ID does not exist" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.596062 4942 scope.go:117] "RemoveContainer" containerID="cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3" Feb 18 19:35:54 crc kubenswrapper[4942]: E0218 19:35:54.596440 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3\": container with ID starting with cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3 not found: ID does not exist" containerID="cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.596476 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3"} err="failed to get container status \"cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3\": rpc error: code = NotFound desc = could not find container \"cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3\": container with ID starting with cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3 not found: ID does not exist" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.596497 4942 scope.go:117] "RemoveContainer" containerID="e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd" Feb 18 19:35:54 crc kubenswrapper[4942]: E0218 19:35:54.596691 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd\": container with ID starting with e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd not found: ID does not exist" containerID="e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd" Feb 18 19:35:54 crc 
kubenswrapper[4942]: I0218 19:35:54.596947 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd"} err="failed to get container status \"e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd\": rpc error: code = NotFound desc = could not find container \"e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd\": container with ID starting with e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd not found: ID does not exist"
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.596963 4942 scope.go:117] "RemoveContainer" containerID="36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407"
Feb 18 19:35:54 crc kubenswrapper[4942]: E0218 19:35:54.597262 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407\": container with ID starting with 36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407 not found: ID does not exist" containerID="36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407"
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.597293 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407"} err="failed to get container status \"36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407\": rpc error: code = NotFound desc = could not find container \"36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407\": container with ID starting with 36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407 not found: ID does not exist"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.010331 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-54d64cf59b-xp7rk"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.045839 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25b5e8eb-ac39-4f81-9601-0b2cc0a54a13" path="/var/lib/kubelet/pods/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13/volumes"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.173130 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-ffd58675f-h7jk6" event={"ID":"0e207482-f349-415e-86d3-800b0caf9a78","Type":"ContainerStarted","Data":"82f116b1fadc9ee58ff0d0e721f26d25af42e5fdecc0bceaf15464762fe8cbb8"}
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.173405 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-ffd58675f-h7jk6" event={"ID":"0e207482-f349-415e-86d3-800b0caf9a78","Type":"ContainerStarted","Data":"88c9bd2cae0182f053bfe66d65099b836ca1dbef4ceef315043341409b84d63c"}
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.177989 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" event={"ID":"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0","Type":"ContainerStarted","Data":"2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b"}
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.178498 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.181302 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" event={"ID":"db966aef-0b18-400a-b3e8-49487308bf05","Type":"ContainerStarted","Data":"96bc861a905a28c4a6e6caa9c8098b62f50f8ee4ae1714468f6fa925f5792b05"}
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.181336 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" event={"ID":"db966aef-0b18-400a-b3e8-49487308bf05","Type":"ContainerStarted","Data":"a17ade9db8f76f59cdc5fa928b002e53b11d6359c565eba8567629af4417fdbf"}
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.183361 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-555cb4cc6f-xh69m" event={"ID":"fb5df9b1-974d-4c39-9278-b79355109acb","Type":"ContainerStarted","Data":"028284f4d716a71cf9e550a8b8a0724dad91e7b8f56a396013671a08337d28f3"}
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.189637 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.195398 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-ffd58675f-h7jk6" podStartSLOduration=2.710985648 podStartE2EDuration="5.195377517s" podCreationTimestamp="2026-02-18 19:35:50 +0000 UTC" firstStartedPulling="2026-02-18 19:35:51.70780626 +0000 UTC m=+1111.412738925" lastFinishedPulling="2026-02-18 19:35:54.192198129 +0000 UTC m=+1113.897130794" observedRunningTime="2026-02-18 19:35:55.188230351 +0000 UTC m=+1114.893163036" watchObservedRunningTime="2026-02-18 19:35:55.195377517 +0000 UTC m=+1114.900310182"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.222091 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7b6b6597b8-m8ngr"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.233899 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" podStartSLOduration=2.849130488 podStartE2EDuration="5.233879608s" podCreationTimestamp="2026-02-18 19:35:50 +0000 UTC" firstStartedPulling="2026-02-18 19:35:51.808468546 +0000 UTC m=+1111.513401221" lastFinishedPulling="2026-02-18 19:35:54.193217676 +0000 UTC m=+1113.898150341" observedRunningTime="2026-02-18 19:35:55.231138606 +0000 UTC m=+1114.936071281" watchObservedRunningTime="2026-02-18 19:35:55.233879608 +0000 UTC m=+1114.938812273"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.307899 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" podStartSLOduration=5.307873421 podStartE2EDuration="5.307873421s" podCreationTimestamp="2026-02-18 19:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:55.275503759 +0000 UTC m=+1114.980436444" watchObservedRunningTime="2026-02-18 19:35:55.307873421 +0000 UTC m=+1115.012806086"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.378935 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.465084 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.486993 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:35:55 crc kubenswrapper[4942]: E0218 19:35:55.487490 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="ceilometer-central-agent"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.487512 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="ceilometer-central-agent"
Feb 18 19:35:55 crc kubenswrapper[4942]: E0218 19:35:55.487532 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="proxy-httpd"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.487540 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="proxy-httpd"
Feb 18 19:35:55 crc kubenswrapper[4942]: E0218 19:35:55.487555 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="ceilometer-notification-agent"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.487880 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="ceilometer-notification-agent"
Feb 18 19:35:55 crc kubenswrapper[4942]: E0218 19:35:55.487926 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="sg-core"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.487934 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="sg-core"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.488136 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="sg-core"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.488161 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="ceilometer-central-agent"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.488176 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="proxy-httpd"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.488195 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="ceilometer-notification-agent"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.490229 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.502127 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.502229 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.538827 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-config-data\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.538917 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-scripts\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.538935 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.538964 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2plf5\" (UniqueName: \"kubernetes.io/projected/cb08df0a-0162-4e04-a641-6fd65af9048b-kube-api-access-2plf5\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.539010 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-run-httpd\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.539031 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.539063 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-log-httpd\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.583187 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.623430 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54d64cf59b-xp7rk"]
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.623730 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-54d64cf59b-xp7rk" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon-log" containerID="cri-o://036dc92b12e420ef80458fb3e23d3375424a9aed1ed6d80a904da58e73ba2659" gracePeriod=30
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.624248 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-54d64cf59b-xp7rk" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon" containerID="cri-o://4bd98068ec637cd03846de3ac7d0bc145a81ebf089811ebc4b9501aa76cae874" gracePeriod=30
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.646964 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-scripts\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.647012 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.647081 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2plf5\" (UniqueName: \"kubernetes.io/projected/cb08df0a-0162-4e04-a641-6fd65af9048b-kube-api-access-2plf5\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.647183 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-run-httpd\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.647217 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.647285 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-log-httpd\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.647373 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-config-data\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.648836 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-log-httpd\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.649506 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-run-httpd\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.669407 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.679538 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-config-data\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.680383 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-scripts\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.689444 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2plf5\" (UniqueName: \"kubernetes.io/projected/cb08df0a-0162-4e04-a641-6fd65af9048b-kube-api-access-2plf5\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.701793 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0"
Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.873506 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.213202 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"17399208-02d7-46c9-b5ea-b01563e8baf1","Type":"ContainerStarted","Data":"ab6c4d04ee142d2e6670d2eba83ed1f3609e146414eca1aa78da29e2ecfc3a7c"}
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.221167 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82d59804-3d83-4594-855b-f08b93e146a4","Type":"ContainerStarted","Data":"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d"}
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.221318 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="82d59804-3d83-4594-855b-f08b93e146a4" containerName="cinder-api-log" containerID="cri-o://06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff" gracePeriod=30
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.221394 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.221772 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="82d59804-3d83-4594-855b-f08b93e146a4" containerName="cinder-api" containerID="cri-o://125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d" gracePeriod=30
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.237371 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-555cb4cc6f-xh69m" event={"ID":"fb5df9b1-974d-4c39-9278-b79355109acb","Type":"ContainerStarted","Data":"209f9439a272fd3644d3f8d4eb1b3879920c1b7b426b77d83f71a0d05d16e49e"}
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.239533 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.274115 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.274094106 podStartE2EDuration="5.274094106s" podCreationTimestamp="2026-02-18 19:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:56.24767341 +0000 UTC m=+1115.952606075" watchObservedRunningTime="2026-02-18 19:35:56.274094106 +0000 UTC m=+1115.979026771"
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.277315 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-555cb4cc6f-xh69m" podStartSLOduration=4.27730205 podStartE2EDuration="4.27730205s" podCreationTimestamp="2026-02-18 19:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:56.274042985 +0000 UTC m=+1115.978975650" watchObservedRunningTime="2026-02-18 19:35:56.27730205 +0000 UTC m=+1115.982234715"
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.345065 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:35:56 crc kubenswrapper[4942]: W0218 19:35:56.349319 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb08df0a_0162_4e04_a641_6fd65af9048b.slice/crio-7b5c07d9023f0c81a3490adcfb94e32fc0800eeb0c4be517c4b9b978e0bb5083 WatchSource:0}: Error finding container 7b5c07d9023f0c81a3490adcfb94e32fc0800eeb0c4be517c4b9b978e0bb5083: Status 404 returned error can't find the container with id 7b5c07d9023f0c81a3490adcfb94e32fc0800eeb0c4be517c4b9b978e0bb5083
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.793473 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.871373 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-combined-ca-bundle\") pod \"82d59804-3d83-4594-855b-f08b93e146a4\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") "
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.871499 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82d59804-3d83-4594-855b-f08b93e146a4-logs\") pod \"82d59804-3d83-4594-855b-f08b93e146a4\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") "
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.871573 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data-custom\") pod \"82d59804-3d83-4594-855b-f08b93e146a4\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") "
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.871597 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data\") pod \"82d59804-3d83-4594-855b-f08b93e146a4\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") "
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.871647 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-scripts\") pod \"82d59804-3d83-4594-855b-f08b93e146a4\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") "
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.871676 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82d59804-3d83-4594-855b-f08b93e146a4-etc-machine-id\") pod \"82d59804-3d83-4594-855b-f08b93e146a4\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") "
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.871713 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdj52\" (UniqueName: \"kubernetes.io/projected/82d59804-3d83-4594-855b-f08b93e146a4-kube-api-access-wdj52\") pod \"82d59804-3d83-4594-855b-f08b93e146a4\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") "
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.873897 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d59804-3d83-4594-855b-f08b93e146a4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "82d59804-3d83-4594-855b-f08b93e146a4" (UID: "82d59804-3d83-4594-855b-f08b93e146a4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.874266 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82d59804-3d83-4594-855b-f08b93e146a4-logs" (OuterVolumeSpecName: "logs") pod "82d59804-3d83-4594-855b-f08b93e146a4" (UID: "82d59804-3d83-4594-855b-f08b93e146a4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.889673 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82d59804-3d83-4594-855b-f08b93e146a4-kube-api-access-wdj52" (OuterVolumeSpecName: "kube-api-access-wdj52") pod "82d59804-3d83-4594-855b-f08b93e146a4" (UID: "82d59804-3d83-4594-855b-f08b93e146a4"). InnerVolumeSpecName "kube-api-access-wdj52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.905914 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "82d59804-3d83-4594-855b-f08b93e146a4" (UID: "82d59804-3d83-4594-855b-f08b93e146a4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.906054 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-scripts" (OuterVolumeSpecName: "scripts") pod "82d59804-3d83-4594-855b-f08b93e146a4" (UID: "82d59804-3d83-4594-855b-f08b93e146a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.926938 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82d59804-3d83-4594-855b-f08b93e146a4" (UID: "82d59804-3d83-4594-855b-f08b93e146a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.976132 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdj52\" (UniqueName: \"kubernetes.io/projected/82d59804-3d83-4594-855b-f08b93e146a4-kube-api-access-wdj52\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.976448 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.976457 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82d59804-3d83-4594-855b-f08b93e146a4-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.976467 4942 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.976477 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.976485 4942 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82d59804-3d83-4594-855b-f08b93e146a4-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.977866 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data" (OuterVolumeSpecName: "config-data") pod "82d59804-3d83-4594-855b-f08b93e146a4" (UID: "82d59804-3d83-4594-855b-f08b93e146a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.048329 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4517368-322e-4467-b31a-45b487e1035b" path="/var/lib/kubelet/pods/e4517368-322e-4467-b31a-45b487e1035b/volumes"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.078056 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.248075 4942 generic.go:334] "Generic (PLEG): container finished" podID="82d59804-3d83-4594-855b-f08b93e146a4" containerID="125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d" exitCode=0
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.248106 4942 generic.go:334] "Generic (PLEG): container finished" podID="82d59804-3d83-4594-855b-f08b93e146a4" containerID="06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff" exitCode=143
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.248151 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82d59804-3d83-4594-855b-f08b93e146a4","Type":"ContainerDied","Data":"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d"}
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.248167 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.248190 4942 scope.go:117] "RemoveContainer" containerID="125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.248178 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82d59804-3d83-4594-855b-f08b93e146a4","Type":"ContainerDied","Data":"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff"}
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.248350 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82d59804-3d83-4594-855b-f08b93e146a4","Type":"ContainerDied","Data":"3311d8cddbe87c83a85f89c5d8660e6aa1ae4c9bc3dfb708ff87ff00d9bd9163"}
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.250665 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerStarted","Data":"ed48b1a780714eb223b18d06dc51c76e72512cff5c52173a2e3ee292ee687994"}
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.250689 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerStarted","Data":"7b5c07d9023f0c81a3490adcfb94e32fc0800eeb0c4be517c4b9b978e0bb5083"}
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.254692 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"17399208-02d7-46c9-b5ea-b01563e8baf1","Type":"ContainerStarted","Data":"24e727d5d2fb180c7e5b210ba8e9f70f0b0d6335ad6d3b2ef9160574585ddb26"}
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.286565 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.154593301 podStartE2EDuration="7.286551435s" podCreationTimestamp="2026-02-18 19:35:50 +0000 UTC" firstStartedPulling="2026-02-18 19:35:51.928965319 +0000 UTC m=+1111.633897984" lastFinishedPulling="2026-02-18 19:35:53.060923453 +0000 UTC m=+1112.765856118" observedRunningTime="2026-02-18 19:35:57.286435232 +0000 UTC m=+1116.991367897" watchObservedRunningTime="2026-02-18 19:35:57.286551435 +0000 UTC m=+1116.991484100"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.307416 4942 scope.go:117] "RemoveContainer" containerID="06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.311905 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.324634 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.381511 4942 scope.go:117] "RemoveContainer" containerID="125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d"
Feb 18 19:35:57 crc kubenswrapper[4942]: E0218 19:35:57.387920 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d\": container with ID starting with 125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d not found: ID does not exist" containerID="125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.387979 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d"} err="failed to get container status \"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d\": rpc error: code = NotFound desc = could not find container \"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d\": container with ID starting with 125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d not found: ID does not exist"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.388010 4942 scope.go:117] "RemoveContainer" containerID="06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff"
Feb 18 19:35:57 crc kubenswrapper[4942]: E0218 19:35:57.392713 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff\": container with ID starting with 06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff not found: ID does not exist" containerID="06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.392756 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff"} err="failed to get container status \"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff\": rpc error: code = NotFound desc = could not find container \"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff\": container with ID starting with 06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff not found: ID does not exist"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.392796 4942 scope.go:117] "RemoveContainer" containerID="125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.392896 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 18 19:35:57 crc kubenswrapper[4942]: E0218 19:35:57.393279 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d59804-3d83-4594-855b-f08b93e146a4" containerName="cinder-api-log"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.393297 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d59804-3d83-4594-855b-f08b93e146a4" containerName="cinder-api-log"
Feb 18 19:35:57 crc kubenswrapper[4942]: E0218 19:35:57.393321 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d59804-3d83-4594-855b-f08b93e146a4" containerName="cinder-api"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.393328 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d59804-3d83-4594-855b-f08b93e146a4" containerName="cinder-api"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.393535 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d59804-3d83-4594-855b-f08b93e146a4" containerName="cinder-api-log"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.393555 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d59804-3d83-4594-855b-f08b93e146a4" containerName="cinder-api"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.394575 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.394860 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d"} err="failed to get container status \"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d\": rpc error: code = NotFound desc = could not find container \"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d\": container with ID starting with 125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d not found: ID does not exist"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.394942 4942 scope.go:117] "RemoveContainer" containerID="06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.409935 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff"} err="failed to get container status \"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff\": rpc error: code = NotFound desc = could not find container \"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff\": container with ID starting with 06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff not found: ID does not exist"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.410242 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.410294 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.410808 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.420754 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485107 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-scripts\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485368 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0"
Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485399 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-logs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485419 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plv8b\" (UniqueName: \"kubernetes.io/projected/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-kube-api-access-plv8b\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485456 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-config-data\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485472 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-config-data-custom\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485506 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485526 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485561 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.527814 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7466887594-rv5fb"] Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.529378 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.537276 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.537505 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.557832 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7466887594-rv5fb"] Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.589813 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.589870 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-scripts\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.589905 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-config-data\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.589939 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.589972 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-logs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.589990 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plv8b\" (UniqueName: \"kubernetes.io/projected/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-kube-api-access-plv8b\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590036 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-config-data\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 
19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590055 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46e4caaf-033e-499f-ba62-77297ea9bf09-logs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590071 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-public-tls-certs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590088 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-config-data-custom\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590104 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-internal-tls-certs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590140 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6rsp\" (UniqueName: \"kubernetes.io/projected/46e4caaf-033e-499f-ba62-77297ea9bf09-kube-api-access-d6rsp\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 
18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590165 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590188 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-combined-ca-bundle\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590216 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590254 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-config-data-custom\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.603132 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-logs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.603216 4942 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.605357 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-scripts\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.605431 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-config-data-custom\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.605716 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.619391 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.628009 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-config-data\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: 
I0218 19:35:57.629264 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.629405 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plv8b\" (UniqueName: \"kubernetes.io/projected/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-kube-api-access-plv8b\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.691962 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-config-data\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.692068 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46e4caaf-033e-499f-ba62-77297ea9bf09-logs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.692087 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-public-tls-certs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.692103 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-internal-tls-certs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.692133 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6rsp\" (UniqueName: \"kubernetes.io/projected/46e4caaf-033e-499f-ba62-77297ea9bf09-kube-api-access-d6rsp\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.692157 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-combined-ca-bundle\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.692190 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-config-data-custom\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.697293 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46e4caaf-033e-499f-ba62-77297ea9bf09-logs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.697511 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-config-data-custom\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.700428 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-combined-ca-bundle\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.702358 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-internal-tls-certs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.702543 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-public-tls-certs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.702810 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-config-data\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.716270 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6rsp\" (UniqueName: 
\"kubernetes.io/projected/46e4caaf-033e-499f-ba62-77297ea9bf09-kube-api-access-d6rsp\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.740505 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.004814 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.307016 4942 generic.go:334] "Generic (PLEG): container finished" podID="921d1a28-ead8-42a6-933c-38a339741884" containerID="5406c6b90781279268f75608c064a21d3a65e4eb4c8a4c7e959d4465b49185b9" exitCode=0 Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.307288 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8c9f8ffc-qtdr8" event={"ID":"921d1a28-ead8-42a6-933c-38a339741884","Type":"ContainerDied","Data":"5406c6b90781279268f75608c064a21d3a65e4eb4c8a4c7e959d4465b49185b9"} Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.333638 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerStarted","Data":"28ebb3effac1a702e96312e12a7195c54046ef1e0a31212d28c03650f2be31be"} Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.350977 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.624854 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7466887594-rv5fb"] Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.722425 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.826649 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-public-tls-certs\") pod \"921d1a28-ead8-42a6-933c-38a339741884\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.826724 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-httpd-config\") pod \"921d1a28-ead8-42a6-933c-38a339741884\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.826836 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-internal-tls-certs\") pod \"921d1a28-ead8-42a6-933c-38a339741884\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.826921 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-combined-ca-bundle\") pod \"921d1a28-ead8-42a6-933c-38a339741884\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.826977 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-ovndb-tls-certs\") pod \"921d1a28-ead8-42a6-933c-38a339741884\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.827046 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-4hnxj\" (UniqueName: \"kubernetes.io/projected/921d1a28-ead8-42a6-933c-38a339741884-kube-api-access-4hnxj\") pod \"921d1a28-ead8-42a6-933c-38a339741884\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.827072 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-config\") pod \"921d1a28-ead8-42a6-933c-38a339741884\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.835922 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "921d1a28-ead8-42a6-933c-38a339741884" (UID: "921d1a28-ead8-42a6-933c-38a339741884"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.846951 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921d1a28-ead8-42a6-933c-38a339741884-kube-api-access-4hnxj" (OuterVolumeSpecName: "kube-api-access-4hnxj") pod "921d1a28-ead8-42a6-933c-38a339741884" (UID: "921d1a28-ead8-42a6-933c-38a339741884"). InnerVolumeSpecName "kube-api-access-4hnxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.928772 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hnxj\" (UniqueName: \"kubernetes.io/projected/921d1a28-ead8-42a6-933c-38a339741884-kube-api-access-4hnxj\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.928795 4942 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.969638 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.991490 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-config" (OuterVolumeSpecName: "config") pod "921d1a28-ead8-42a6-933c-38a339741884" (UID: "921d1a28-ead8-42a6-933c-38a339741884"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.021921 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "921d1a28-ead8-42a6-933c-38a339741884" (UID: "921d1a28-ead8-42a6-933c-38a339741884"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.027189 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "921d1a28-ead8-42a6-933c-38a339741884" (UID: "921d1a28-ead8-42a6-933c-38a339741884"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.034463 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.034495 4942 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.034504 4942 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.048536 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "921d1a28-ead8-42a6-933c-38a339741884" (UID: "921d1a28-ead8-42a6-933c-38a339741884"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.050649 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82d59804-3d83-4594-855b-f08b93e146a4" path="/var/lib/kubelet/pods/82d59804-3d83-4594-855b-f08b93e146a4/volumes"
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.061819 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "921d1a28-ead8-42a6-933c-38a339741884" (UID: "921d1a28-ead8-42a6-933c-38a339741884"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.137152 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.137193 4942 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.345598 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8c9f8ffc-qtdr8" event={"ID":"921d1a28-ead8-42a6-933c-38a339741884","Type":"ContainerDied","Data":"66f57c246570cb64775a601036f5870a5885605c57cb8be2088eae510c596f8b"}
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.345645 4942 scope.go:117] "RemoveContainer" containerID="531ee7816fd7353cd71c0f54232b96ad0dd37eddd3c96b8ac1f0e58197be9795"
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.345803 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6b8c9f8ffc-qtdr8"
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.353143 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerStarted","Data":"9ecd7aaddb526f7a536755bf17c5ed2cdffb53f01f22747fc9607ce810b409a8"}
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.355361 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33","Type":"ContainerStarted","Data":"ad038fc1ee429cd544c0e75765ad1ed5a8e87869e90710cfa255fd8624784168"}
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.367080 4942 generic.go:334] "Generic (PLEG): container finished" podID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerID="4bd98068ec637cd03846de3ac7d0bc145a81ebf089811ebc4b9501aa76cae874" exitCode=0
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.367197 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d64cf59b-xp7rk" event={"ID":"3ecc91e6-4e7f-438f-8530-bb8dd55764c5","Type":"ContainerDied","Data":"4bd98068ec637cd03846de3ac7d0bc145a81ebf089811ebc4b9501aa76cae874"}
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.368972 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7466887594-rv5fb" event={"ID":"46e4caaf-033e-499f-ba62-77297ea9bf09","Type":"ContainerStarted","Data":"4ffa8e9432b6d4ce19fa4001c63b4c9090479b3a053a642b9e8553aa1018e9d7"}
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.369021 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7466887594-rv5fb" event={"ID":"46e4caaf-033e-499f-ba62-77297ea9bf09","Type":"ContainerStarted","Data":"f1833c1ccdd12413f87fcdc260f82fdab94d8363b1c89c2dbb4056950bdcb7cf"}
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.370311 4942 scope.go:117] "RemoveContainer" containerID="5406c6b90781279268f75608c064a21d3a65e4eb4c8a4c7e959d4465b49185b9"
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.391206 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6b8c9f8ffc-qtdr8"]
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.398555 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6b8c9f8ffc-qtdr8"]
Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.527294 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-54d64cf59b-xp7rk" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.158:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.158:8443: connect: connection refused"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.163980 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5794bf846d-82xzg"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.380732 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7466887594-rv5fb" event={"ID":"46e4caaf-033e-499f-ba62-77297ea9bf09","Type":"ContainerStarted","Data":"496841a8a2bcd2e51d01f58997e1633d5cebae01d7a3e67cfc824c322b9f302a"}
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.381806 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7466887594-rv5fb"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.381833 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7466887594-rv5fb"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.390559 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33","Type":"ContainerStarted","Data":"fe39f5bff16e5bbe2053562d9c1cf7bbf6bd07f8ce8109ac820b957d380fec40"}
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.390594 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33","Type":"ContainerStarted","Data":"810ebbebb7330da8a4a8589f64ea283676bd49a0b492e606a4519a0883509167"}
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.391131 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.412998 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9bf555976-zxfhl"]
Feb 18 19:36:00 crc kubenswrapper[4942]: E0218 19:36:00.413368 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-api"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.413384 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-api"
Feb 18 19:36:00 crc kubenswrapper[4942]: E0218 19:36:00.413405 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-httpd"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.413412 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-httpd"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.413595 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-httpd"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.413615 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-api"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.414549 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.420119 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7466887594-rv5fb" podStartSLOduration=3.420102179 podStartE2EDuration="3.420102179s" podCreationTimestamp="2026-02-18 19:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:00.412331857 +0000 UTC m=+1120.117264522" watchObservedRunningTime="2026-02-18 19:36:00.420102179 +0000 UTC m=+1120.125034844"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.452948 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9bf555976-zxfhl"]
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.468540 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.468520738 podStartE2EDuration="3.468520738s" podCreationTimestamp="2026-02-18 19:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:00.43745918 +0000 UTC m=+1120.142391845" watchObservedRunningTime="2026-02-18 19:36:00.468520738 +0000 UTC m=+1120.173453403"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.487490 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19290621-80f0-4d8b-b200-d3cce6889538-logs\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.487530 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-internal-tls-certs\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.487550 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7l6m\" (UniqueName: \"kubernetes.io/projected/19290621-80f0-4d8b-b200-d3cce6889538-kube-api-access-p7l6m\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.487570 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-combined-ca-bundle\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.487623 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-public-tls-certs\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.487663 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-scripts\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.487807 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-config-data\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.589869 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19290621-80f0-4d8b-b200-d3cce6889538-logs\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.589955 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-internal-tls-certs\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.589985 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7l6m\" (UniqueName: \"kubernetes.io/projected/19290621-80f0-4d8b-b200-d3cce6889538-kube-api-access-p7l6m\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.590018 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-combined-ca-bundle\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.590079 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-public-tls-certs\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.590129 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-scripts\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.590250 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-config-data\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.590352 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19290621-80f0-4d8b-b200-d3cce6889538-logs\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.599956 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-config-data\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.601403 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-internal-tls-certs\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.605270 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-scripts\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.607376 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-public-tls-certs\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.613544 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-combined-ca-bundle\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.618250 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7l6m\" (UniqueName: \"kubernetes.io/projected/19290621-80f0-4d8b-b200-d3cce6889538-kube-api-access-p7l6m\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.655344 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-56897c69bf-gkt87"
Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.731803 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.060308 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="921d1a28-ead8-42a6-933c-38a339741884" path="/var/lib/kubelet/pods/921d1a28-ead8-42a6-933c-38a339741884/volumes"
Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.181150 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9bf555976-zxfhl"]
Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.281061 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.400490 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9bf555976-zxfhl" event={"ID":"19290621-80f0-4d8b-b200-d3cce6889538","Type":"ContainerStarted","Data":"3a879eb8eca35f47b596b651b0ea4eff90e88803d3978b94f1b13f1e9a9997ca"}
Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.454951 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl"
Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.564805 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-b4sf9"]
Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.565028 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" podUID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerName="dnsmasq-dns" containerID="cri-o://07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587" gracePeriod=10
Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.777182 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" podUID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.164:5353: connect: connection refused"
Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.809830 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.873119 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.170490 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"]
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.170819 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api-log" containerID="cri-o://aa132dbcbfbe636d2466bf98fe3a945bcf6b8f37a1c6b00263bbaa8b8d41b75b" gracePeriod=30
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.171385 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api" containerID="cri-o://a5770f508e1c40bf4ef682bff10bac69873d582c1a0625dbd01c701b14695817" gracePeriod=30
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.225269 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.356054 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gq76\" (UniqueName: \"kubernetes.io/projected/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-kube-api-access-7gq76\") pod \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") "
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.356370 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-sb\") pod \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") "
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.356470 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-config\") pod \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") "
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.356486 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-nb\") pod \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") "
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.356544 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-swift-storage-0\") pod \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") "
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.356566 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-svc\") pod \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") "
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.369958 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-kube-api-access-7gq76" (OuterVolumeSpecName: "kube-api-access-7gq76") pod "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" (UID: "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd"). InnerVolumeSpecName "kube-api-access-7gq76". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.445137 4942 generic.go:334] "Generic (PLEG): container finished" podID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerID="07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587" exitCode=0
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.445224 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" event={"ID":"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd","Type":"ContainerDied","Data":"07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587"}
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.445250 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" event={"ID":"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd","Type":"ContainerDied","Data":"a28152676e5bbeaa52dbf0acfa190644662ce9fce2d0b5f7310504317b4faf82"}
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.445265 4942 scope.go:117] "RemoveContainer" containerID="07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587"
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.445389 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.458462 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gq76\" (UniqueName: \"kubernetes.io/projected/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-kube-api-access-7gq76\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.458612 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" (UID: "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.462958 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" (UID: "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.463912 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" (UID: "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.467222 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerStarted","Data":"33f88e67e2d64ef0cdf5c3ea9ad2d23061784bba770fa1c0fe079285a1cbbc56"}
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.468392 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.487342 4942 generic.go:334] "Generic (PLEG): container finished" podID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerID="aa132dbcbfbe636d2466bf98fe3a945bcf6b8f37a1c6b00263bbaa8b8d41b75b" exitCode=143
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.487429 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cf325d20-c507-42cc-b96f-6e57ff55aa53","Type":"ContainerDied","Data":"aa132dbcbfbe636d2466bf98fe3a945bcf6b8f37a1c6b00263bbaa8b8d41b75b"}
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.498054 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerName="cinder-scheduler" containerID="cri-o://ab6c4d04ee142d2e6670d2eba83ed1f3609e146414eca1aa78da29e2ecfc3a7c" gracePeriod=30
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.498627 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9bf555976-zxfhl" event={"ID":"19290621-80f0-4d8b-b200-d3cce6889538","Type":"ContainerStarted","Data":"ab7f32936c01a7d49bbf7291938fdcdda17569c42b2beb30aa56125c0e8689d4"}
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.498712 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.498729 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9bf555976-zxfhl" event={"ID":"19290621-80f0-4d8b-b200-d3cce6889538","Type":"ContainerStarted","Data":"318cdd790ce9717634ea5f676a2cb9d466b36b0a3d6579495f195b9ff09ceada"}
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.498821 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9bf555976-zxfhl"
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.499179 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerName="probe" containerID="cri-o://24e727d5d2fb180c7e5b210ba8e9f70f0b0d6335ad6d3b2ef9160574585ddb26" gracePeriod=30
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.507937 4942 scope.go:117] "RemoveContainer" containerID="9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe"
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.526252 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" (UID: "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.529680 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-config" (OuterVolumeSpecName: "config") pod "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" (UID: "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.539117 4942 scope.go:117] "RemoveContainer" containerID="07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587"
Feb 18 19:36:02 crc kubenswrapper[4942]: E0218 19:36:02.541803 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587\": container with ID starting with 07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587 not found: ID does not exist" containerID="07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587"
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.541876 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587"} err="failed to get container status \"07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587\": rpc error: code = NotFound desc = could not find container \"07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587\": container with ID starting with 07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587 not found: ID does not exist"
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.541904 4942 scope.go:117] "RemoveContainer" containerID="9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe"
Feb 18 19:36:02 crc kubenswrapper[4942]: E0218 19:36:02.542485 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe\": container with ID starting with 9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe not found: ID does not exist" containerID="9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe"
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.542563 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe"} err="failed to get container status \"9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe\": rpc error: code = NotFound desc = could not find container \"9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe\": container with ID starting with 9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe not found: ID does not exist"
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.552865 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.365925938 podStartE2EDuration="7.552840929s" podCreationTimestamp="2026-02-18 19:35:55 +0000 UTC" firstStartedPulling="2026-02-18 19:35:56.351526079 +0000 UTC m=+1116.056458744" lastFinishedPulling="2026-02-18 19:36:01.53844107 +0000 UTC m=+1121.243373735" observedRunningTime="2026-02-18 19:36:02.511129705 +0000 UTC m=+1122.216062370" watchObservedRunningTime="2026-02-18 19:36:02.552840929 +0000 UTC m=+1122.257773594"
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.554118 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-9bf555976-zxfhl" podStartSLOduration=2.554110112 podStartE2EDuration="2.554110112s" podCreationTimestamp="2026-02-18 19:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:02.538097035 +0000 UTC m=+1122.243029700" watchObservedRunningTime="2026-02-18 19:36:02.554110112 +0000 UTC m=+1122.259042777"
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.560542 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.560567 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.560576 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.560609 4942 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.560622 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.786091 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-b4sf9"]
Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.800892 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-b4sf9"]
Feb 18 19:36:03 crc kubenswrapper[4942]: I0218 19:36:03.047477 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" path="/var/lib/kubelet/pods/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd/volumes"
Feb 18 19:36:03 crc kubenswrapper[4942]: I0218 19:36:03.490147 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-56897c69bf-gkt87"
Feb 18 19:36:03 crc kubenswrapper[4942]: I0218 19:36:03.498749 4942 generic.go:334] "Generic (PLEG): container finished" podID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerID="24e727d5d2fb180c7e5b210ba8e9f70f0b0d6335ad6d3b2ef9160574585ddb26" exitCode=0
Feb 18 19:36:03 crc kubenswrapper[4942]: I0218 19:36:03.498801 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"17399208-02d7-46c9-b5ea-b01563e8baf1","Type":"ContainerDied","Data":"24e727d5d2fb180c7e5b210ba8e9f70f0b0d6335ad6d3b2ef9160574585ddb26"}
Feb 18 19:36:03 crc kubenswrapper[4942]: I0218 19:36:03.651400 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9d65dd5d-c4zgj"
Feb 18 19:36:03 crc kubenswrapper[4942]: I0218 19:36:03.769440 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9d65dd5d-c4zgj"
Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.511954 4942 generic.go:334] "Generic (PLEG): container finished" podID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerID="ab6c4d04ee142d2e6670d2eba83ed1f3609e146414eca1aa78da29e2ecfc3a7c" exitCode=0
Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.512105 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"17399208-02d7-46c9-b5ea-b01563e8baf1","Type":"ContainerDied","Data":"ab6c4d04ee142d2e6670d2eba83ed1f3609e146414eca1aa78da29e2ecfc3a7c"}
Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.846992 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.921222 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb8kr\" (UniqueName: \"kubernetes.io/projected/17399208-02d7-46c9-b5ea-b01563e8baf1-kube-api-access-nb8kr\") pod \"17399208-02d7-46c9-b5ea-b01563e8baf1\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") "
Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.921273 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data\") pod \"17399208-02d7-46c9-b5ea-b01563e8baf1\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") "
Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.921347 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-scripts\") pod \"17399208-02d7-46c9-b5ea-b01563e8baf1\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") "
Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.921403 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data-custom\") pod \"17399208-02d7-46c9-b5ea-b01563e8baf1\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") "
Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.921457 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-combined-ca-bundle\") pod \"17399208-02d7-46c9-b5ea-b01563e8baf1\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") "
Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.921527 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\"
(UniqueName: \"kubernetes.io/host-path/17399208-02d7-46c9-b5ea-b01563e8baf1-etc-machine-id\") pod \"17399208-02d7-46c9-b5ea-b01563e8baf1\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.922075 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17399208-02d7-46c9-b5ea-b01563e8baf1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "17399208-02d7-46c9-b5ea-b01563e8baf1" (UID: "17399208-02d7-46c9-b5ea-b01563e8baf1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.937278 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17399208-02d7-46c9-b5ea-b01563e8baf1-kube-api-access-nb8kr" (OuterVolumeSpecName: "kube-api-access-nb8kr") pod "17399208-02d7-46c9-b5ea-b01563e8baf1" (UID: "17399208-02d7-46c9-b5ea-b01563e8baf1"). InnerVolumeSpecName "kube-api-access-nb8kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.937498 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "17399208-02d7-46c9-b5ea-b01563e8baf1" (UID: "17399208-02d7-46c9-b5ea-b01563e8baf1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.939875 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-scripts" (OuterVolumeSpecName: "scripts") pod "17399208-02d7-46c9-b5ea-b01563e8baf1" (UID: "17399208-02d7-46c9-b5ea-b01563e8baf1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.026571 4942 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17399208-02d7-46c9-b5ea-b01563e8baf1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.026603 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb8kr\" (UniqueName: \"kubernetes.io/projected/17399208-02d7-46c9-b5ea-b01563e8baf1-kube-api-access-nb8kr\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.026613 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.026622 4942 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.047862 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data" (OuterVolumeSpecName: "config-data") pod "17399208-02d7-46c9-b5ea-b01563e8baf1" (UID: "17399208-02d7-46c9-b5ea-b01563e8baf1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.047936 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17399208-02d7-46c9-b5ea-b01563e8baf1" (UID: "17399208-02d7-46c9-b5ea-b01563e8baf1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.057963 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.129454 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.129673 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.335902 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.172:9322/\": read tcp 10.217.0.2:50166->10.217.0.172:9322: read: connection reset by peer" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.335964 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.172:9322/\": read tcp 10.217.0.2:50150->10.217.0.172:9322: read: connection reset by peer" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.523004 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"17399208-02d7-46c9-b5ea-b01563e8baf1","Type":"ContainerDied","Data":"0e634a244135542433fea3600e46692e4afcef32f4f22d2c4274a7c75eb4af2b"} Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.523055 4942 scope.go:117] "RemoveContainer" containerID="24e727d5d2fb180c7e5b210ba8e9f70f0b0d6335ad6d3b2ef9160574585ddb26" Feb 18 
19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.523183 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.528714 4942 generic.go:334] "Generic (PLEG): container finished" podID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerID="a5770f508e1c40bf4ef682bff10bac69873d582c1a0625dbd01c701b14695817" exitCode=0 Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.528866 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cf325d20-c507-42cc-b96f-6e57ff55aa53","Type":"ContainerDied","Data":"a5770f508e1c40bf4ef682bff10bac69873d582c1a0625dbd01c701b14695817"} Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.555961 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.584594 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.601831 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:36:05 crc kubenswrapper[4942]: E0218 19:36:05.602524 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerName="probe" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.602544 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerName="probe" Feb 18 19:36:05 crc kubenswrapper[4942]: E0218 19:36:05.602574 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerName="init" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.602580 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerName="init" Feb 18 19:36:05 crc kubenswrapper[4942]: E0218 19:36:05.602616 
4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerName="cinder-scheduler" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.602622 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerName="cinder-scheduler" Feb 18 19:36:05 crc kubenswrapper[4942]: E0218 19:36:05.602650 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerName="dnsmasq-dns" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.602657 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerName="dnsmasq-dns" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.602883 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerName="dnsmasq-dns" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.602900 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerName="probe" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.602933 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerName="cinder-scheduler" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.604191 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.607046 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.607095 4942 scope.go:117] "RemoveContainer" containerID="ab6c4d04ee142d2e6670d2eba83ed1f3609e146414eca1aa78da29e2ecfc3a7c" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.619586 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.638141 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-scripts\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.638176 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.638194 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-config-data\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.638218 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rs4l\" (UniqueName: \"kubernetes.io/projected/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-kube-api-access-5rs4l\") 
pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.638237 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.638273 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.741736 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-scripts\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.741794 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.741816 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-config-data\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc 
kubenswrapper[4942]: I0218 19:36:05.741844 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rs4l\" (UniqueName: \"kubernetes.io/projected/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-kube-api-access-5rs4l\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.741864 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.741907 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.746679 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.746990 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.750739 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-config-data\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.755020 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.755328 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-scripts\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.789483 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rs4l\" (UniqueName: \"kubernetes.io/projected/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-kube-api-access-5rs4l\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.818198 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.944656 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-config-data\") pod \"cf325d20-c507-42cc-b96f-6e57ff55aa53\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.944787 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf325d20-c507-42cc-b96f-6e57ff55aa53-logs\") pod \"cf325d20-c507-42cc-b96f-6e57ff55aa53\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.944832 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-combined-ca-bundle\") pod \"cf325d20-c507-42cc-b96f-6e57ff55aa53\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.944851 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-485k7\" (UniqueName: \"kubernetes.io/projected/cf325d20-c507-42cc-b96f-6e57ff55aa53-kube-api-access-485k7\") pod \"cf325d20-c507-42cc-b96f-6e57ff55aa53\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.944906 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-custom-prometheus-ca\") pod \"cf325d20-c507-42cc-b96f-6e57ff55aa53\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.945459 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cf325d20-c507-42cc-b96f-6e57ff55aa53-logs" (OuterVolumeSpecName: "logs") pod "cf325d20-c507-42cc-b96f-6e57ff55aa53" (UID: "cf325d20-c507-42cc-b96f-6e57ff55aa53"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.948595 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf325d20-c507-42cc-b96f-6e57ff55aa53-kube-api-access-485k7" (OuterVolumeSpecName: "kube-api-access-485k7") pod "cf325d20-c507-42cc-b96f-6e57ff55aa53" (UID: "cf325d20-c507-42cc-b96f-6e57ff55aa53"). InnerVolumeSpecName "kube-api-access-485k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.952955 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.984673 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "cf325d20-c507-42cc-b96f-6e57ff55aa53" (UID: "cf325d20-c507-42cc-b96f-6e57ff55aa53"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.003521 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf325d20-c507-42cc-b96f-6e57ff55aa53" (UID: "cf325d20-c507-42cc-b96f-6e57ff55aa53"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.008854 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-config-data" (OuterVolumeSpecName: "config-data") pod "cf325d20-c507-42cc-b96f-6e57ff55aa53" (UID: "cf325d20-c507-42cc-b96f-6e57ff55aa53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.046553 4942 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.046581 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.046589 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf325d20-c507-42cc-b96f-6e57ff55aa53-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.046600 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.046610 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-485k7\" (UniqueName: \"kubernetes.io/projected/cf325d20-c507-42cc-b96f-6e57ff55aa53-kube-api-access-485k7\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:06 crc kubenswrapper[4942]: W0218 19:36:06.490931 4942 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7ce79f4_8fac_499d_aa4d_1ca6b2b50259.slice/crio-0b8007bb2b22198f3e91f17ff9f81ad24951fa3b38c0d678886241682b40539e WatchSource:0}: Error finding container 0b8007bb2b22198f3e91f17ff9f81ad24951fa3b38c0d678886241682b40539e: Status 404 returned error can't find the container with id 0b8007bb2b22198f3e91f17ff9f81ad24951fa3b38c0d678886241682b40539e Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.498731 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.555713 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cf325d20-c507-42cc-b96f-6e57ff55aa53","Type":"ContainerDied","Data":"796b3cc6f87bbc8cea79f9f672a04a291cbb2f04782a6f0d27d4592a418cd947"} Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.555775 4942 scope.go:117] "RemoveContainer" containerID="a5770f508e1c40bf4ef682bff10bac69873d582c1a0625dbd01c701b14695817" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.555864 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.563303 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259","Type":"ContainerStarted","Data":"0b8007bb2b22198f3e91f17ff9f81ad24951fa3b38c0d678886241682b40539e"} Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.624921 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.629971 4942 scope.go:117] "RemoveContainer" containerID="aa132dbcbfbe636d2466bf98fe3a945bcf6b8f37a1c6b00263bbaa8b8d41b75b" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.652834 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.660552 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:36:06 crc kubenswrapper[4942]: E0218 19:36:06.660928 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api-log" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.660943 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api-log" Feb 18 19:36:06 crc kubenswrapper[4942]: E0218 19:36:06.660951 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.660957 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.661140 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api-log" Feb 18 19:36:06 crc 
kubenswrapper[4942]: I0218 19:36:06.661164 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.662072 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.668239 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.668535 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.668676 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.702816 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.762772 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qkvd\" (UniqueName: \"kubernetes.io/projected/618db7e3-a45b-472e-8341-bce342277a17-kube-api-access-5qkvd\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.762820 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/618db7e3-a45b-472e-8341-bce342277a17-logs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.762862 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-public-tls-certs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.762880 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-config-data\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.762912 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.762936 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.762958 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.801632 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.802969 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.812512 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.812570 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lmtd6" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.812707 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.850108 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864081 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38accb89-093d-4b4b-b098-b4f73a4bb561-openstack-config-secret\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864122 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qbj4\" (UniqueName: \"kubernetes.io/projected/38accb89-093d-4b4b-b098-b4f73a4bb561-kube-api-access-6qbj4\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864153 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qkvd\" (UniqueName: \"kubernetes.io/projected/618db7e3-a45b-472e-8341-bce342277a17-kube-api-access-5qkvd\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864178 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/618db7e3-a45b-472e-8341-bce342277a17-logs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864223 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-public-tls-certs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864241 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-config-data\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864262 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38accb89-093d-4b4b-b098-b4f73a4bb561-openstack-config\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864283 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864305 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: 
\"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864328 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864349 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38accb89-093d-4b4b-b098-b4f73a4bb561-combined-ca-bundle\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.865068 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/618db7e3-a45b-472e-8341-bce342277a17-logs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.869860 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-public-tls-certs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.871742 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.872061 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.873644 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-config-data\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.874103 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.879989 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qkvd\" (UniqueName: \"kubernetes.io/projected/618db7e3-a45b-472e-8341-bce342277a17-kube-api-access-5qkvd\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.966463 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38accb89-093d-4b4b-b098-b4f73a4bb561-openstack-config\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.966535 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38accb89-093d-4b4b-b098-b4f73a4bb561-combined-ca-bundle\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " 
pod="openstack/openstackclient" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.966618 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38accb89-093d-4b4b-b098-b4f73a4bb561-openstack-config-secret\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.966639 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qbj4\" (UniqueName: \"kubernetes.io/projected/38accb89-093d-4b4b-b098-b4f73a4bb561-kube-api-access-6qbj4\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.967648 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38accb89-093d-4b4b-b098-b4f73a4bb561-openstack-config\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.974383 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38accb89-093d-4b4b-b098-b4f73a4bb561-openstack-config-secret\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.974937 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38accb89-093d-4b4b-b098-b4f73a4bb561-combined-ca-bundle\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.990786 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.996995 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qbj4\" (UniqueName: \"kubernetes.io/projected/38accb89-093d-4b4b-b098-b4f73a4bb561-kube-api-access-6qbj4\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient" Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.078412 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" path="/var/lib/kubelet/pods/17399208-02d7-46c9-b5ea-b01563e8baf1/volumes" Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.080628 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" path="/var/lib/kubelet/pods/cf325d20-c507-42cc-b96f-6e57ff55aa53/volumes" Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.115573 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.143098 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.174608 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-9d65dd5d-c4zgj"] Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.174888 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-9d65dd5d-c4zgj" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerName="barbican-api-log" containerID="cri-o://2b088a9056603d3d58e3baff59e58248fc06291c4ff662a1d08a6fc2664c9a1c" gracePeriod=30 Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.175356 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-9d65dd5d-c4zgj" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerName="barbican-api" containerID="cri-o://ecbd025dc0394b9034d21e03a44147434ce1904d40c5ab1c61c7e88c90aadd1e" gracePeriod=30 Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.596993 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259","Type":"ContainerStarted","Data":"a4316a50ea1a16243db84d37fb517e94ea394f23b89e3660f9729bb3224e6560"} Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.612153 4942 generic.go:334] "Generic (PLEG): container finished" podID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerID="2b088a9056603d3d58e3baff59e58248fc06291c4ff662a1d08a6fc2664c9a1c" exitCode=143 Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.612216 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9d65dd5d-c4zgj" event={"ID":"e4cc3ba2-abea-4fa2-9272-65ac8721c87d","Type":"ContainerDied","Data":"2b088a9056603d3d58e3baff59e58248fc06291c4ff662a1d08a6fc2664c9a1c"} Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.661998 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 19:36:07 crc 
kubenswrapper[4942]: I0218 19:36:07.705472 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:36:07 crc kubenswrapper[4942]: W0218 19:36:07.708178 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38accb89_093d_4b4b_b098_b4f73a4bb561.slice/crio-ff1cdb0b7e3b37cce93a44da8103dcaa016bfadf422c68a610fa69a6094f9cc2 WatchSource:0}: Error finding container ff1cdb0b7e3b37cce93a44da8103dcaa016bfadf422c68a610fa69a6094f9cc2: Status 404 returned error can't find the container with id ff1cdb0b7e3b37cce93a44da8103dcaa016bfadf422c68a610fa69a6094f9cc2 Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.631097 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"618db7e3-a45b-472e-8341-bce342277a17","Type":"ContainerStarted","Data":"823c265199a899046da3b6d893152513a99c4c8a0856243e6a7ab9e94e7c5d6b"} Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.631693 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"618db7e3-a45b-472e-8341-bce342277a17","Type":"ContainerStarted","Data":"f4180aa5041d1857f75582c9a3b1a76fe76e0ec02072654dd06aa9a911fc7f4e"} Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.631715 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.631725 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"618db7e3-a45b-472e-8341-bce342277a17","Type":"ContainerStarted","Data":"cadeec1713837beea98b89d34f97a59e57ba43023eb28c9ee03066e955fb17d2"} Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.633503 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="618db7e3-a45b-472e-8341-bce342277a17" containerName="watcher-api" probeResult="failure" output="Get 
\"https://10.217.0.187:9322/\": dial tcp 10.217.0.187:9322: connect: connection refused" Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.644029 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259","Type":"ContainerStarted","Data":"f125ab975ab7eb97174e78d37f99220d729200ee72a3e7255f597efca4a8defc"} Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.650478 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"38accb89-093d-4b4b-b098-b4f73a4bb561","Type":"ContainerStarted","Data":"ff1cdb0b7e3b37cce93a44da8103dcaa016bfadf422c68a610fa69a6094f9cc2"} Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.667791 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.6677535409999997 podStartE2EDuration="2.667753541s" podCreationTimestamp="2026-02-18 19:36:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:08.659320782 +0000 UTC m=+1128.364253487" watchObservedRunningTime="2026-02-18 19:36:08.667753541 +0000 UTC m=+1128.372686206" Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.688221 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.688193713 podStartE2EDuration="3.688193713s" podCreationTimestamp="2026-02-18 19:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:08.680689858 +0000 UTC m=+1128.385622523" watchObservedRunningTime="2026-02-18 19:36:08.688193713 +0000 UTC m=+1128.393126388" Feb 18 19:36:09 crc kubenswrapper[4942]: I0218 19:36:09.527199 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-54d64cf59b-xp7rk" 
podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.158:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.158:8443: connect: connection refused" Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.188294 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.690094 4942 generic.go:334] "Generic (PLEG): container finished" podID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerID="ecbd025dc0394b9034d21e03a44147434ce1904d40c5ab1c61c7e88c90aadd1e" exitCode=0 Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.690141 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9d65dd5d-c4zgj" event={"ID":"e4cc3ba2-abea-4fa2-9272-65ac8721c87d","Type":"ContainerDied","Data":"ecbd025dc0394b9034d21e03a44147434ce1904d40c5ab1c61c7e88c90aadd1e"} Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.848521 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.954251 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.993538 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data-custom\") pod \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.993585 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-combined-ca-bundle\") pod \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.993618 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92ffg\" (UniqueName: \"kubernetes.io/projected/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-kube-api-access-92ffg\") pod \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.993681 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data\") pod \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.993780 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-logs\") pod \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\" (UID: 
\"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.994205 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-logs" (OuterVolumeSpecName: "logs") pod "e4cc3ba2-abea-4fa2-9272-65ac8721c87d" (UID: "e4cc3ba2-abea-4fa2-9272-65ac8721c87d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.994699 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.000674 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e4cc3ba2-abea-4fa2-9272-65ac8721c87d" (UID: "e4cc3ba2-abea-4fa2-9272-65ac8721c87d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.006261 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-kube-api-access-92ffg" (OuterVolumeSpecName: "kube-api-access-92ffg") pod "e4cc3ba2-abea-4fa2-9272-65ac8721c87d" (UID: "e4cc3ba2-abea-4fa2-9272-65ac8721c87d"). InnerVolumeSpecName "kube-api-access-92ffg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.050725 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4cc3ba2-abea-4fa2-9272-65ac8721c87d" (UID: "e4cc3ba2-abea-4fa2-9272-65ac8721c87d"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.067642 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data" (OuterVolumeSpecName: "config-data") pod "e4cc3ba2-abea-4fa2-9272-65ac8721c87d" (UID: "e4cc3ba2-abea-4fa2-9272-65ac8721c87d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.097550 4942 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.097590 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.097602 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92ffg\" (UniqueName: \"kubernetes.io/projected/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-kube-api-access-92ffg\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.097615 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.701785 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9d65dd5d-c4zgj" event={"ID":"e4cc3ba2-abea-4fa2-9272-65ac8721c87d","Type":"ContainerDied","Data":"508f30ffb0657c1e039b8b11a78534bab62a7a31f3ad591584cdc61bbaa73274"} Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.701839 4942 
scope.go:117] "RemoveContainer" containerID="ecbd025dc0394b9034d21e03a44147434ce1904d40c5ab1c61c7e88c90aadd1e" Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.701972 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.731350 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-9d65dd5d-c4zgj"] Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.738261 4942 scope.go:117] "RemoveContainer" containerID="2b088a9056603d3d58e3baff59e58248fc06291c4ff662a1d08a6fc2664c9a1c" Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.742824 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-9d65dd5d-c4zgj"] Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.992566 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:36:12 crc kubenswrapper[4942]: I0218 19:36:12.032038 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.051329 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" path="/var/lib/kubelet/pods/e4cc3ba2-abea-4fa2-9272-65ac8721c87d/volumes" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.627051 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-b6f54bc7f-8lcdv"] Feb 18 19:36:13 crc kubenswrapper[4942]: E0218 19:36:13.627446 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerName="barbican-api-log" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.627462 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerName="barbican-api-log" Feb 18 19:36:13 crc kubenswrapper[4942]: E0218 
19:36:13.627487 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerName="barbican-api" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.627495 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerName="barbican-api" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.627683 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerName="barbican-api" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.627702 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerName="barbican-api-log" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.630472 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.636201 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.636752 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.636799 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.639446 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-b6f54bc7f-8lcdv"] Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.656901 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-config-data\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 
19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.656987 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-internal-tls-certs\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.657042 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b50976a2-9059-4076-8a11-9c86c8b49070-log-httpd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.657069 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b50976a2-9059-4076-8a11-9c86c8b49070-run-httpd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.657088 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-combined-ca-bundle\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.657115 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slqrd\" (UniqueName: \"kubernetes.io/projected/b50976a2-9059-4076-8a11-9c86c8b49070-kube-api-access-slqrd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " 
pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.657185 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-public-tls-certs\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.657212 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b50976a2-9059-4076-8a11-9c86c8b49070-etc-swift\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.759638 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-internal-tls-certs\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.759740 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b50976a2-9059-4076-8a11-9c86c8b49070-log-httpd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.759792 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-combined-ca-bundle\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " 
pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.759815 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b50976a2-9059-4076-8a11-9c86c8b49070-run-httpd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.759844 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slqrd\" (UniqueName: \"kubernetes.io/projected/b50976a2-9059-4076-8a11-9c86c8b49070-kube-api-access-slqrd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.759912 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-public-tls-certs\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.759943 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b50976a2-9059-4076-8a11-9c86c8b49070-etc-swift\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.760016 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-config-data\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc 
kubenswrapper[4942]: I0218 19:36:13.760650 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b50976a2-9059-4076-8a11-9c86c8b49070-run-httpd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.760874 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b50976a2-9059-4076-8a11-9c86c8b49070-log-httpd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.765353 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-combined-ca-bundle\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.766588 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-public-tls-certs\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.771905 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b50976a2-9059-4076-8a11-9c86c8b49070-etc-swift\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.772503 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-config-data\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.772553 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-internal-tls-certs\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.784496 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slqrd\" (UniqueName: \"kubernetes.io/projected/b50976a2-9059-4076-8a11-9c86c8b49070-kube-api-access-slqrd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.950871 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:16 crc kubenswrapper[4942]: I0218 19:36:16.145153 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 19:36:16 crc kubenswrapper[4942]: I0218 19:36:16.992034 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 18 19:36:17 crc kubenswrapper[4942]: I0218 19:36:17.002166 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 18 19:36:17 crc kubenswrapper[4942]: I0218 19:36:17.771448 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:36:18 crc kubenswrapper[4942]: I0218 19:36:18.510839 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-b6f54bc7f-8lcdv"] Feb 18 19:36:18 crc kubenswrapper[4942]: W0218 19:36:18.511442 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb50976a2_9059_4076_8a11_9c86c8b49070.slice/crio-db0f50882f211bb3b4947a29515f0070d719cad9af603748c03e2cd23af0f1c4 WatchSource:0}: Error finding container db0f50882f211bb3b4947a29515f0070d719cad9af603748c03e2cd23af0f1c4: Status 404 returned error can't find the container with id db0f50882f211bb3b4947a29515f0070d719cad9af603748c03e2cd23af0f1c4 Feb 18 19:36:18 crc kubenswrapper[4942]: I0218 19:36:18.778464 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-b6f54bc7f-8lcdv" event={"ID":"b50976a2-9059-4076-8a11-9c86c8b49070","Type":"ContainerStarted","Data":"8c43d49e7d7c3a5091c20a931fc541f39b912cacdcb12e7dc5468bd791862b57"} Feb 18 19:36:18 crc kubenswrapper[4942]: I0218 19:36:18.778813 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-b6f54bc7f-8lcdv" 
event={"ID":"b50976a2-9059-4076-8a11-9c86c8b49070","Type":"ContainerStarted","Data":"db0f50882f211bb3b4947a29515f0070d719cad9af603748c03e2cd23af0f1c4"} Feb 18 19:36:18 crc kubenswrapper[4942]: I0218 19:36:18.781882 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"38accb89-093d-4b4b-b098-b4f73a4bb561","Type":"ContainerStarted","Data":"cf37ca18e3ccd543dae3900379bb0f942e5bece6b1bf367bc113b0672e408fc5"} Feb 18 19:36:18 crc kubenswrapper[4942]: I0218 19:36:18.800021 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.531448059 podStartE2EDuration="12.800003382s" podCreationTimestamp="2026-02-18 19:36:06 +0000 UTC" firstStartedPulling="2026-02-18 19:36:07.7178738 +0000 UTC m=+1127.422806455" lastFinishedPulling="2026-02-18 19:36:17.986429103 +0000 UTC m=+1137.691361778" observedRunningTime="2026-02-18 19:36:18.796450989 +0000 UTC m=+1138.501383654" watchObservedRunningTime="2026-02-18 19:36:18.800003382 +0000 UTC m=+1138.504936047" Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.527949 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-54d64cf59b-xp7rk" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.158:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.158:8443: connect: connection refused" Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.528643 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.800252 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-b6f54bc7f-8lcdv" event={"ID":"b50976a2-9059-4076-8a11-9c86c8b49070","Type":"ContainerStarted","Data":"8aada01431394c4f8ce3a99a4f95b7f95f34d7bf452f92879db8c95b52c42e03"} Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 
19:36:19.800348 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.800396 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.824245 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-b6f54bc7f-8lcdv" podStartSLOduration=6.824225686 podStartE2EDuration="6.824225686s" podCreationTimestamp="2026-02-18 19:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:19.821100825 +0000 UTC m=+1139.526033500" watchObservedRunningTime="2026-02-18 19:36:19.824225686 +0000 UTC m=+1139.529158351" Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.969995 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.970404 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="ceilometer-central-agent" containerID="cri-o://ed48b1a780714eb223b18d06dc51c76e72512cff5c52173a2e3ee292ee687994" gracePeriod=30 Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.971033 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="proxy-httpd" containerID="cri-o://33f88e67e2d64ef0cdf5c3ea9ad2d23061784bba770fa1c0fe079285a1cbbc56" gracePeriod=30 Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.971226 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="sg-core" 
containerID="cri-o://9ecd7aaddb526f7a536755bf17c5ed2cdffb53f01f22747fc9607ce810b409a8" gracePeriod=30 Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.971323 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="ceilometer-notification-agent" containerID="cri-o://28ebb3effac1a702e96312e12a7195c54046ef1e0a31212d28c03650f2be31be" gracePeriod=30 Feb 18 19:36:20 crc kubenswrapper[4942]: I0218 19:36:20.078555 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.182:3000/\": read tcp 10.217.0.2:44782->10.217.0.182:3000: read: connection reset by peer" Feb 18 19:36:20 crc kubenswrapper[4942]: I0218 19:36:20.810185 4942 generic.go:334] "Generic (PLEG): container finished" podID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerID="33f88e67e2d64ef0cdf5c3ea9ad2d23061784bba770fa1c0fe079285a1cbbc56" exitCode=0 Feb 18 19:36:20 crc kubenswrapper[4942]: I0218 19:36:20.810501 4942 generic.go:334] "Generic (PLEG): container finished" podID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerID="9ecd7aaddb526f7a536755bf17c5ed2cdffb53f01f22747fc9607ce810b409a8" exitCode=2 Feb 18 19:36:20 crc kubenswrapper[4942]: I0218 19:36:20.810515 4942 generic.go:334] "Generic (PLEG): container finished" podID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerID="ed48b1a780714eb223b18d06dc51c76e72512cff5c52173a2e3ee292ee687994" exitCode=0 Feb 18 19:36:20 crc kubenswrapper[4942]: I0218 19:36:20.810266 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerDied","Data":"33f88e67e2d64ef0cdf5c3ea9ad2d23061784bba770fa1c0fe079285a1cbbc56"} Feb 18 19:36:20 crc kubenswrapper[4942]: I0218 19:36:20.810608 4942 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerDied","Data":"9ecd7aaddb526f7a536755bf17c5ed2cdffb53f01f22747fc9607ce810b409a8"} Feb 18 19:36:20 crc kubenswrapper[4942]: I0218 19:36:20.810627 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerDied","Data":"ed48b1a780714eb223b18d06dc51c76e72512cff5c52173a2e3ee292ee687994"} Feb 18 19:36:21 crc kubenswrapper[4942]: I0218 19:36:21.824591 4942 generic.go:334] "Generic (PLEG): container finished" podID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerID="28ebb3effac1a702e96312e12a7195c54046ef1e0a31212d28c03650f2be31be" exitCode=0 Feb 18 19:36:21 crc kubenswrapper[4942]: I0218 19:36:21.824817 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerDied","Data":"28ebb3effac1a702e96312e12a7195c54046ef1e0a31212d28c03650f2be31be"} Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.058084 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.227234 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-combined-ca-bundle\") pod \"cb08df0a-0162-4e04-a641-6fd65af9048b\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.227310 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-scripts\") pod \"cb08df0a-0162-4e04-a641-6fd65af9048b\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.227395 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-run-httpd\") pod \"cb08df0a-0162-4e04-a641-6fd65af9048b\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.227492 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-config-data\") pod \"cb08df0a-0162-4e04-a641-6fd65af9048b\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.227542 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-log-httpd\") pod \"cb08df0a-0162-4e04-a641-6fd65af9048b\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.227592 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-sg-core-conf-yaml\") pod \"cb08df0a-0162-4e04-a641-6fd65af9048b\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.227970 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2plf5\" (UniqueName: \"kubernetes.io/projected/cb08df0a-0162-4e04-a641-6fd65af9048b-kube-api-access-2plf5\") pod \"cb08df0a-0162-4e04-a641-6fd65af9048b\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.228820 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cb08df0a-0162-4e04-a641-6fd65af9048b" (UID: "cb08df0a-0162-4e04-a641-6fd65af9048b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.228953 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cb08df0a-0162-4e04-a641-6fd65af9048b" (UID: "cb08df0a-0162-4e04-a641-6fd65af9048b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.229332 4942 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.229361 4942 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.236962 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb08df0a-0162-4e04-a641-6fd65af9048b-kube-api-access-2plf5" (OuterVolumeSpecName: "kube-api-access-2plf5") pod "cb08df0a-0162-4e04-a641-6fd65af9048b" (UID: "cb08df0a-0162-4e04-a641-6fd65af9048b"). InnerVolumeSpecName "kube-api-access-2plf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.240332 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-scripts" (OuterVolumeSpecName: "scripts") pod "cb08df0a-0162-4e04-a641-6fd65af9048b" (UID: "cb08df0a-0162-4e04-a641-6fd65af9048b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.273635 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cb08df0a-0162-4e04-a641-6fd65af9048b" (UID: "cb08df0a-0162-4e04-a641-6fd65af9048b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.317012 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb08df0a-0162-4e04-a641-6fd65af9048b" (UID: "cb08df0a-0162-4e04-a641-6fd65af9048b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.331271 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2plf5\" (UniqueName: \"kubernetes.io/projected/cb08df0a-0162-4e04-a641-6fd65af9048b-kube-api-access-2plf5\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.331313 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.331324 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.331333 4942 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.354070 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-config-data" (OuterVolumeSpecName: "config-data") pod "cb08df0a-0162-4e04-a641-6fd65af9048b" (UID: "cb08df0a-0162-4e04-a641-6fd65af9048b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.421906 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.432819 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.482292 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67cc44d6c6-sp59w"] Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.482564 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67cc44d6c6-sp59w" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerName="neutron-api" containerID="cri-o://8b2790adbab8c3f7f1e931b6f90eb17d0d170a8ea3e8297671b08ac8cd2f42be" gracePeriod=30 Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.483098 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67cc44d6c6-sp59w" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerName="neutron-httpd" containerID="cri-o://686f47180a9ccf7623cbed7358eef7f2d2fa27a8a72e96ad726f79f619dd1afc" gracePeriod=30 Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.834817 4942 generic.go:334] "Generic (PLEG): container finished" podID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerID="686f47180a9ccf7623cbed7358eef7f2d2fa27a8a72e96ad726f79f619dd1afc" exitCode=0 Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.834901 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc44d6c6-sp59w" event={"ID":"df34bdbb-8771-4d46-b5ba-29088c793a4c","Type":"ContainerDied","Data":"686f47180a9ccf7623cbed7358eef7f2d2fa27a8a72e96ad726f79f619dd1afc"} Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.838775 4942 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerDied","Data":"7b5c07d9023f0c81a3490adcfb94e32fc0800eeb0c4be517c4b9b978e0bb5083"} Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.838833 4942 scope.go:117] "RemoveContainer" containerID="33f88e67e2d64ef0cdf5c3ea9ad2d23061784bba770fa1c0fe079285a1cbbc56" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.838836 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.877331 4942 scope.go:117] "RemoveContainer" containerID="9ecd7aaddb526f7a536755bf17c5ed2cdffb53f01f22747fc9607ce810b409a8" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.922242 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.933993 4942 scope.go:117] "RemoveContainer" containerID="28ebb3effac1a702e96312e12a7195c54046ef1e0a31212d28c03650f2be31be" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.947829 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.960380 4942 scope.go:117] "RemoveContainer" containerID="ed48b1a780714eb223b18d06dc51c76e72512cff5c52173a2e3ee292ee687994" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967090 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:22 crc kubenswrapper[4942]: E0218 19:36:22.967578 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="sg-core" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967601 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="sg-core" Feb 18 19:36:22 crc kubenswrapper[4942]: E0218 
19:36:22.967620 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="proxy-httpd" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967628 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="proxy-httpd" Feb 18 19:36:22 crc kubenswrapper[4942]: E0218 19:36:22.967645 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="ceilometer-notification-agent" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967655 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="ceilometer-notification-agent" Feb 18 19:36:22 crc kubenswrapper[4942]: E0218 19:36:22.967684 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="ceilometer-central-agent" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967692 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="ceilometer-central-agent" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967930 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="proxy-httpd" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967945 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="sg-core" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967961 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="ceilometer-central-agent" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967975 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" 
containerName="ceilometer-notification-agent" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.983265 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.990751 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.991029 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.995818 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.051101 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" path="/var/lib/kubelet/pods/cb08df0a-0162-4e04-a641-6fd65af9048b/volumes" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.051803 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hxdjn"] Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.054613 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hxdjn" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.065253 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hxdjn"] Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.129090 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-d7fm8"] Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.130419 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-d7fm8"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.137541 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f195-account-create-update-jjctk"]
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.138859 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f195-account-create-update-jjctk"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.141971 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147358 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwktj\" (UniqueName: \"kubernetes.io/projected/696aecc5-9837-4941-a9e2-06c1743b6983-kube-api-access-zwktj\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147443 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147474 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-scripts\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147508 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtfvh\" (UniqueName: \"kubernetes.io/projected/54e11ed4-f85e-4125-acc8-b0b86cef91fb-kube-api-access-mtfvh\") pod \"nova-api-db-create-hxdjn\" (UID: \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\") " pod="openstack/nova-api-db-create-hxdjn"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147540 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-run-httpd\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147556 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54e11ed4-f85e-4125-acc8-b0b86cef91fb-operator-scripts\") pod \"nova-api-db-create-hxdjn\" (UID: \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\") " pod="openstack/nova-api-db-create-hxdjn"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147595 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147613 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-log-httpd\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147642 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-config-data\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.148742 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d7fm8"]
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.158437 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f195-account-create-update-jjctk"]
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249638 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-scripts\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249685 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtfvh\" (UniqueName: \"kubernetes.io/projected/54e11ed4-f85e-4125-acc8-b0b86cef91fb-kube-api-access-mtfvh\") pod \"nova-api-db-create-hxdjn\" (UID: \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\") " pod="openstack/nova-api-db-create-hxdjn"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249711 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-run-httpd\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249731 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54e11ed4-f85e-4125-acc8-b0b86cef91fb-operator-scripts\") pod \"nova-api-db-create-hxdjn\" (UID: \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\") " pod="openstack/nova-api-db-create-hxdjn"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249776 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249791 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-log-httpd\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249809 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8xk6\" (UniqueName: \"kubernetes.io/projected/3319773b-d924-402a-adbd-f421ee14c994-kube-api-access-r8xk6\") pod \"nova-cell0-db-create-d7fm8\" (UID: \"3319773b-d924-402a-adbd-f421ee14c994\") " pod="openstack/nova-cell0-db-create-d7fm8"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249829 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-config-data\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249880 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w65jq\" (UniqueName: \"kubernetes.io/projected/ef4ca914-d763-484f-aa35-39dbd725d14c-kube-api-access-w65jq\") pod \"nova-api-f195-account-create-update-jjctk\" (UID: \"ef4ca914-d763-484f-aa35-39dbd725d14c\") " pod="openstack/nova-api-f195-account-create-update-jjctk"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249930 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwktj\" (UniqueName: \"kubernetes.io/projected/696aecc5-9837-4941-a9e2-06c1743b6983-kube-api-access-zwktj\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249955 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3319773b-d924-402a-adbd-f421ee14c994-operator-scripts\") pod \"nova-cell0-db-create-d7fm8\" (UID: \"3319773b-d924-402a-adbd-f421ee14c994\") " pod="openstack/nova-cell0-db-create-d7fm8"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249981 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef4ca914-d763-484f-aa35-39dbd725d14c-operator-scripts\") pod \"nova-api-f195-account-create-update-jjctk\" (UID: \"ef4ca914-d763-484f-aa35-39dbd725d14c\") " pod="openstack/nova-api-f195-account-create-update-jjctk"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.250009 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.250917 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-log-httpd\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.251034 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-run-httpd\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.251470 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54e11ed4-f85e-4125-acc8-b0b86cef91fb-operator-scripts\") pod \"nova-api-db-create-hxdjn\" (UID: \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\") " pod="openstack/nova-api-db-create-hxdjn"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.254974 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.255368 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-config-data\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.257632 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-scripts\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.264777 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.270900 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwktj\" (UniqueName: \"kubernetes.io/projected/696aecc5-9837-4941-a9e2-06c1743b6983-kube-api-access-zwktj\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.277430 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtfvh\" (UniqueName: \"kubernetes.io/projected/54e11ed4-f85e-4125-acc8-b0b86cef91fb-kube-api-access-mtfvh\") pod \"nova-api-db-create-hxdjn\" (UID: \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\") " pod="openstack/nova-api-db-create-hxdjn"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.321432 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.323298 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-f9r9j"]
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.324444 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f9r9j"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.337029 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-f9r9j"]
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.352298 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3319773b-d924-402a-adbd-f421ee14c994-operator-scripts\") pod \"nova-cell0-db-create-d7fm8\" (UID: \"3319773b-d924-402a-adbd-f421ee14c994\") " pod="openstack/nova-cell0-db-create-d7fm8"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.352367 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef4ca914-d763-484f-aa35-39dbd725d14c-operator-scripts\") pod \"nova-api-f195-account-create-update-jjctk\" (UID: \"ef4ca914-d763-484f-aa35-39dbd725d14c\") " pod="openstack/nova-api-f195-account-create-update-jjctk"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.352464 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8xk6\" (UniqueName: \"kubernetes.io/projected/3319773b-d924-402a-adbd-f421ee14c994-kube-api-access-r8xk6\") pod \"nova-cell0-db-create-d7fm8\" (UID: \"3319773b-d924-402a-adbd-f421ee14c994\") " pod="openstack/nova-cell0-db-create-d7fm8"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.352557 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w65jq\" (UniqueName: \"kubernetes.io/projected/ef4ca914-d763-484f-aa35-39dbd725d14c-kube-api-access-w65jq\") pod \"nova-api-f195-account-create-update-jjctk\" (UID: \"ef4ca914-d763-484f-aa35-39dbd725d14c\") " pod="openstack/nova-api-f195-account-create-update-jjctk"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.353687 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3319773b-d924-402a-adbd-f421ee14c994-operator-scripts\") pod \"nova-cell0-db-create-d7fm8\" (UID: \"3319773b-d924-402a-adbd-f421ee14c994\") " pod="openstack/nova-cell0-db-create-d7fm8"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.354287 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef4ca914-d763-484f-aa35-39dbd725d14c-operator-scripts\") pod \"nova-api-f195-account-create-update-jjctk\" (UID: \"ef4ca914-d763-484f-aa35-39dbd725d14c\") " pod="openstack/nova-api-f195-account-create-update-jjctk"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.354486 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1b0e-account-create-update-p6b7z"]
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.355919 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.358285 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.373412 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8xk6\" (UniqueName: \"kubernetes.io/projected/3319773b-d924-402a-adbd-f421ee14c994-kube-api-access-r8xk6\") pod \"nova-cell0-db-create-d7fm8\" (UID: \"3319773b-d924-402a-adbd-f421ee14c994\") " pod="openstack/nova-cell0-db-create-d7fm8"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.374916 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w65jq\" (UniqueName: \"kubernetes.io/projected/ef4ca914-d763-484f-aa35-39dbd725d14c-kube-api-access-w65jq\") pod \"nova-api-f195-account-create-update-jjctk\" (UID: \"ef4ca914-d763-484f-aa35-39dbd725d14c\") " pod="openstack/nova-api-f195-account-create-update-jjctk"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.381358 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hxdjn"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.386841 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1b0e-account-create-update-p6b7z"]
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.436514 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-a3b1-account-create-update-sdgp2"]
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.438099 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.442197 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.454079 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svc8k\" (UniqueName: \"kubernetes.io/projected/de103e96-857c-4fa9-b78b-51c8f4734643-kube-api-access-svc8k\") pod \"nova-cell1-db-create-f9r9j\" (UID: \"de103e96-857c-4fa9-b78b-51c8f4734643\") " pod="openstack/nova-cell1-db-create-f9r9j"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.454147 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv798\" (UniqueName: \"kubernetes.io/projected/908017b2-bbca-42f2-b6a0-af358a18d1b7-kube-api-access-pv798\") pod \"nova-cell0-1b0e-account-create-update-p6b7z\" (UID: \"908017b2-bbca-42f2-b6a0-af358a18d1b7\") " pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.454225 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de103e96-857c-4fa9-b78b-51c8f4734643-operator-scripts\") pod \"nova-cell1-db-create-f9r9j\" (UID: \"de103e96-857c-4fa9-b78b-51c8f4734643\") " pod="openstack/nova-cell1-db-create-f9r9j"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.454252 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/908017b2-bbca-42f2-b6a0-af358a18d1b7-operator-scripts\") pod \"nova-cell0-1b0e-account-create-update-p6b7z\" (UID: \"908017b2-bbca-42f2-b6a0-af358a18d1b7\") " pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.455437 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a3b1-account-create-update-sdgp2"]
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.460135 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d7fm8"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.475802 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f195-account-create-update-jjctk"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.561613 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvvfc\" (UniqueName: \"kubernetes.io/projected/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-kube-api-access-xvvfc\") pod \"nova-cell1-a3b1-account-create-update-sdgp2\" (UID: \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\") " pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.561974 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de103e96-857c-4fa9-b78b-51c8f4734643-operator-scripts\") pod \"nova-cell1-db-create-f9r9j\" (UID: \"de103e96-857c-4fa9-b78b-51c8f4734643\") " pod="openstack/nova-cell1-db-create-f9r9j"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.562010 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/908017b2-bbca-42f2-b6a0-af358a18d1b7-operator-scripts\") pod \"nova-cell0-1b0e-account-create-update-p6b7z\" (UID: \"908017b2-bbca-42f2-b6a0-af358a18d1b7\") " pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.562060 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-operator-scripts\") pod \"nova-cell1-a3b1-account-create-update-sdgp2\" (UID: \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\") " pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.562141 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svc8k\" (UniqueName: \"kubernetes.io/projected/de103e96-857c-4fa9-b78b-51c8f4734643-kube-api-access-svc8k\") pod \"nova-cell1-db-create-f9r9j\" (UID: \"de103e96-857c-4fa9-b78b-51c8f4734643\") " pod="openstack/nova-cell1-db-create-f9r9j"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.562190 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv798\" (UniqueName: \"kubernetes.io/projected/908017b2-bbca-42f2-b6a0-af358a18d1b7-kube-api-access-pv798\") pod \"nova-cell0-1b0e-account-create-update-p6b7z\" (UID: \"908017b2-bbca-42f2-b6a0-af358a18d1b7\") " pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.564329 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/908017b2-bbca-42f2-b6a0-af358a18d1b7-operator-scripts\") pod \"nova-cell0-1b0e-account-create-update-p6b7z\" (UID: \"908017b2-bbca-42f2-b6a0-af358a18d1b7\") " pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.564916 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de103e96-857c-4fa9-b78b-51c8f4734643-operator-scripts\") pod \"nova-cell1-db-create-f9r9j\" (UID: \"de103e96-857c-4fa9-b78b-51c8f4734643\") " pod="openstack/nova-cell1-db-create-f9r9j"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.586995 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv798\" (UniqueName: \"kubernetes.io/projected/908017b2-bbca-42f2-b6a0-af358a18d1b7-kube-api-access-pv798\") pod \"nova-cell0-1b0e-account-create-update-p6b7z\" (UID: \"908017b2-bbca-42f2-b6a0-af358a18d1b7\") " pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.611303 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svc8k\" (UniqueName: \"kubernetes.io/projected/de103e96-857c-4fa9-b78b-51c8f4734643-kube-api-access-svc8k\") pod \"nova-cell1-db-create-f9r9j\" (UID: \"de103e96-857c-4fa9-b78b-51c8f4734643\") " pod="openstack/nova-cell1-db-create-f9r9j"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.669568 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-operator-scripts\") pod \"nova-cell1-a3b1-account-create-update-sdgp2\" (UID: \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\") " pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.669777 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvvfc\" (UniqueName: \"kubernetes.io/projected/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-kube-api-access-xvvfc\") pod \"nova-cell1-a3b1-account-create-update-sdgp2\" (UID: \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\") " pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.670799 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-operator-scripts\") pod \"nova-cell1-a3b1-account-create-update-sdgp2\" (UID: \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\") " pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.706750 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvvfc\" (UniqueName: \"kubernetes.io/projected/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-kube-api-access-xvvfc\") pod \"nova-cell1-a3b1-account-create-update-sdgp2\" (UID: \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\") " pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.764192 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f9r9j"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.775677 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.812218 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.977791 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-b6f54bc7f-8lcdv"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.978534 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-b6f54bc7f-8lcdv"
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.100035 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:36:24 crc kubenswrapper[4942]: W0218 19:36:24.124814 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod696aecc5_9837_4941_a9e2_06c1743b6983.slice/crio-307cb7b1955145e6c351de8d60f608d94882a5c445ff5005916b7c10fe933d13 WatchSource:0}: Error finding container 307cb7b1955145e6c351de8d60f608d94882a5c445ff5005916b7c10fe933d13: Status 404 returned error can't find the container with id 307cb7b1955145e6c351de8d60f608d94882a5c445ff5005916b7c10fe933d13
Feb 18 19:36:24 crc kubenswrapper[4942]: W0218 19:36:24.398942 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54e11ed4_f85e_4125_acc8_b0b86cef91fb.slice/crio-ddd182eb206dc22a342a3b1f3594a6f99229b825fa7f274d17d9f1ea44479c1e WatchSource:0}: Error finding container ddd182eb206dc22a342a3b1f3594a6f99229b825fa7f274d17d9f1ea44479c1e: Status 404 returned error can't find the container with id ddd182eb206dc22a342a3b1f3594a6f99229b825fa7f274d17d9f1ea44479c1e
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.401596 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hxdjn"]
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.525106 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d7fm8"]
Feb 18 19:36:24 crc kubenswrapper[4942]: W0218 19:36:24.536350 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef4ca914_d763_484f_aa35_39dbd725d14c.slice/crio-ca97c472a2ce2679b6717b8b6ab28e4c936e92048aa5f3ab74e769d7bcd04c68 WatchSource:0}: Error finding container ca97c472a2ce2679b6717b8b6ab28e4c936e92048aa5f3ab74e769d7bcd04c68: Status 404 returned error can't find the container with id ca97c472a2ce2679b6717b8b6ab28e4c936e92048aa5f3ab74e769d7bcd04c68
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.544740 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f195-account-create-update-jjctk"]
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.700048 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-f9r9j"]
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.723819 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1b0e-account-create-update-p6b7z"]
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.733220 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a3b1-account-create-update-sdgp2"]
Feb 18 19:36:24 crc kubenswrapper[4942]: W0218 19:36:24.767000 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdd3a7b9_5bb1_47a4_8a4a_95131e50cf27.slice/crio-fd25e15f2b69ef489266f93ff5e54bcabaad72ec94bfa30e588907d0fa96302e WatchSource:0}: Error finding container fd25e15f2b69ef489266f93ff5e54bcabaad72ec94bfa30e588907d0fa96302e: Status 404 returned error can't find the container with id fd25e15f2b69ef489266f93ff5e54bcabaad72ec94bfa30e588907d0fa96302e
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.950006 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f9r9j" event={"ID":"de103e96-857c-4fa9-b78b-51c8f4734643","Type":"ContainerStarted","Data":"b394124733feae208cca8678a899da21a42cd1a1bcdab470215b05da69af051d"}
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.967385 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerStarted","Data":"307cb7b1955145e6c351de8d60f608d94882a5c445ff5005916b7c10fe933d13"}
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.984479 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hxdjn" event={"ID":"54e11ed4-f85e-4125-acc8-b0b86cef91fb","Type":"ContainerStarted","Data":"9f2c359e5e4f7ba110dc92287a82c170423f21670d64c2a6b420aa0beff96ce3"}
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.984530 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hxdjn" event={"ID":"54e11ed4-f85e-4125-acc8-b0b86cef91fb","Type":"ContainerStarted","Data":"ddd182eb206dc22a342a3b1f3594a6f99229b825fa7f274d17d9f1ea44479c1e"}
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.987943 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" event={"ID":"908017b2-bbca-42f2-b6a0-af358a18d1b7","Type":"ContainerStarted","Data":"527d26ceec3f8972dda44cae7e3560073a290e058817bf5b7e32fe2b65220c1d"}
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.991853 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d7fm8" event={"ID":"3319773b-d924-402a-adbd-f421ee14c994","Type":"ContainerStarted","Data":"4ee086e7e747f10b7d38270d86480864775d35a33a827da89168941ff41e3484"}
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.991931 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d7fm8" event={"ID":"3319773b-d924-402a-adbd-f421ee14c994","Type":"ContainerStarted","Data":"64ececf7ff8da7e27ae9a12795f5871d5ca9a079d17366752709256c0742b5ba"}
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.999735 4942 generic.go:334] "Generic (PLEG): container finished" podID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerID="8b2790adbab8c3f7f1e931b6f90eb17d0d170a8ea3e8297671b08ac8cd2f42be" exitCode=0
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.999831 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc44d6c6-sp59w" event={"ID":"df34bdbb-8771-4d46-b5ba-29088c793a4c","Type":"ContainerDied","Data":"8b2790adbab8c3f7f1e931b6f90eb17d0d170a8ea3e8297671b08ac8cd2f42be"}
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.002792 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" event={"ID":"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27","Type":"ContainerStarted","Data":"fd25e15f2b69ef489266f93ff5e54bcabaad72ec94bfa30e588907d0fa96302e"}
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.006631 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f195-account-create-update-jjctk" event={"ID":"ef4ca914-d763-484f-aa35-39dbd725d14c","Type":"ContainerStarted","Data":"c2c74965083b09d2fda5c205fdee24ab8d991088f20cd6c4fd29973dbc9a7c39"}
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.006668 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f195-account-create-update-jjctk" event={"ID":"ef4ca914-d763-484f-aa35-39dbd725d14c","Type":"ContainerStarted","Data":"ca97c472a2ce2679b6717b8b6ab28e4c936e92048aa5f3ab74e769d7bcd04c68"}
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.019281 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-d7fm8" podStartSLOduration=2.019260487 podStartE2EDuration="2.019260487s" podCreationTimestamp="2026-02-18 19:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:25.014930204 +0000 UTC m=+1144.719862869" watchObservedRunningTime="2026-02-18 19:36:25.019260487 +0000 UTC m=+1144.724193162"
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.046269 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-f195-account-create-update-jjctk" podStartSLOduration=2.0462509779999998 podStartE2EDuration="2.046250978s" podCreationTimestamp="2026-02-18 19:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:25.030609762 +0000 UTC m=+1144.735542427" watchObservedRunningTime="2026-02-18 19:36:25.046250978 +0000 UTC m=+1144.751183643"
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.274199 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.449935 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.537711 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-config\") pod \"df34bdbb-8771-4d46-b5ba-29088c793a4c\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") "
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.538152 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-httpd-config\") pod \"df34bdbb-8771-4d46-b5ba-29088c793a4c\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") "
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.538179 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-ovndb-tls-certs\") pod \"df34bdbb-8771-4d46-b5ba-29088c793a4c\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") "
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.538244 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-combined-ca-bundle\") pod \"df34bdbb-8771-4d46-b5ba-29088c793a4c\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") "
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.538342 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znrs9\" (UniqueName: \"kubernetes.io/projected/df34bdbb-8771-4d46-b5ba-29088c793a4c-kube-api-access-znrs9\") pod \"df34bdbb-8771-4d46-b5ba-29088c793a4c\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") "
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.544864 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df34bdbb-8771-4d46-b5ba-29088c793a4c-kube-api-access-znrs9" (OuterVolumeSpecName: "kube-api-access-znrs9") pod "df34bdbb-8771-4d46-b5ba-29088c793a4c" (UID: "df34bdbb-8771-4d46-b5ba-29088c793a4c"). InnerVolumeSpecName "kube-api-access-znrs9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.546355 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "df34bdbb-8771-4d46-b5ba-29088c793a4c" (UID: "df34bdbb-8771-4d46-b5ba-29088c793a4c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.621721 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.622022 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-log" containerID="cri-o://af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2" gracePeriod=30
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.622124 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-httpd" containerID="cri-o://0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7" gracePeriod=30
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.642169 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znrs9\" (UniqueName: \"kubernetes.io/projected/df34bdbb-8771-4d46-b5ba-29088c793a4c-kube-api-access-znrs9\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.642199 4942 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.643892 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "df34bdbb-8771-4d46-b5ba-29088c793a4c" (UID: "df34bdbb-8771-4d46-b5ba-29088c793a4c"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.643975 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df34bdbb-8771-4d46-b5ba-29088c793a4c" (UID: "df34bdbb-8771-4d46-b5ba-29088c793a4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.666117 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-config" (OuterVolumeSpecName: "config") pod "df34bdbb-8771-4d46-b5ba-29088c793a4c" (UID: "df34bdbb-8771-4d46-b5ba-29088c793a4c"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.745469 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.745513 4942 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.745522 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.019703 4942 generic.go:334] "Generic (PLEG): container finished" podID="3319773b-d924-402a-adbd-f421ee14c994" containerID="4ee086e7e747f10b7d38270d86480864775d35a33a827da89168941ff41e3484" exitCode=0 Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.019752 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d7fm8" event={"ID":"3319773b-d924-402a-adbd-f421ee14c994","Type":"ContainerDied","Data":"4ee086e7e747f10b7d38270d86480864775d35a33a827da89168941ff41e3484"} Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.021539 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" event={"ID":"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27","Type":"ContainerStarted","Data":"12651ed44c362c43a5a615685457fd590c1593f4afa3ac50fda9dea54a2e1f71"} Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.024535 4942 generic.go:334] "Generic (PLEG): container finished" podID="de103e96-857c-4fa9-b78b-51c8f4734643" containerID="a09c56da144b09bdcb7865a7cc27a2ff95e7937bd4f16a766144008dd1c49144" exitCode=0 Feb 
18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.024880 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f9r9j" event={"ID":"de103e96-857c-4fa9-b78b-51c8f4734643","Type":"ContainerDied","Data":"a09c56da144b09bdcb7865a7cc27a2ff95e7937bd4f16a766144008dd1c49144"} Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.027667 4942 generic.go:334] "Generic (PLEG): container finished" podID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerID="af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2" exitCode=143 Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.027882 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2","Type":"ContainerDied","Data":"af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2"} Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.055260 4942 generic.go:334] "Generic (PLEG): container finished" podID="54e11ed4-f85e-4125-acc8-b0b86cef91fb" containerID="9f2c359e5e4f7ba110dc92287a82c170423f21670d64c2a6b420aa0beff96ce3" exitCode=0 Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.055347 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hxdjn" event={"ID":"54e11ed4-f85e-4125-acc8-b0b86cef91fb","Type":"ContainerDied","Data":"9f2c359e5e4f7ba110dc92287a82c170423f21670d64c2a6b420aa0beff96ce3"} Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.065192 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" event={"ID":"908017b2-bbca-42f2-b6a0-af358a18d1b7","Type":"ContainerStarted","Data":"866788c6c2a051f7476fcb5d58fd9c13e62810bec69e94d021b4616590e98f0b"} Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.082987 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc44d6c6-sp59w" 
event={"ID":"df34bdbb-8771-4d46-b5ba-29088c793a4c","Type":"ContainerDied","Data":"16cfdf5777da304074f8658c0e294de7985ac237e0c31312cdfc21ceef0ca88c"} Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.083063 4942 scope.go:117] "RemoveContainer" containerID="686f47180a9ccf7623cbed7358eef7f2d2fa27a8a72e96ad726f79f619dd1afc" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.083208 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.091733 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" podStartSLOduration=3.091710415 podStartE2EDuration="3.091710415s" podCreationTimestamp="2026-02-18 19:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:26.068634605 +0000 UTC m=+1145.773567280" watchObservedRunningTime="2026-02-18 19:36:26.091710415 +0000 UTC m=+1145.796643080" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.100889 4942 generic.go:334] "Generic (PLEG): container finished" podID="ef4ca914-d763-484f-aa35-39dbd725d14c" containerID="c2c74965083b09d2fda5c205fdee24ab8d991088f20cd6c4fd29973dbc9a7c39" exitCode=0 Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.101054 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f195-account-create-update-jjctk" event={"ID":"ef4ca914-d763-484f-aa35-39dbd725d14c","Type":"ContainerDied","Data":"c2c74965083b09d2fda5c205fdee24ab8d991088f20cd6c4fd29973dbc9a7c39"} Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.124714 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" podStartSLOduration=3.124697682 podStartE2EDuration="3.124697682s" podCreationTimestamp="2026-02-18 19:36:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:26.123153422 +0000 UTC m=+1145.828086087" watchObservedRunningTime="2026-02-18 19:36:26.124697682 +0000 UTC m=+1145.829630347" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.136712 4942 generic.go:334] "Generic (PLEG): container finished" podID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerID="036dc92b12e420ef80458fb3e23d3375424a9aed1ed6d80a904da58e73ba2659" exitCode=137 Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.136786 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d64cf59b-xp7rk" event={"ID":"3ecc91e6-4e7f-438f-8530-bb8dd55764c5","Type":"ContainerDied","Data":"036dc92b12e420ef80458fb3e23d3375424a9aed1ed6d80a904da58e73ba2659"} Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.193468 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerStarted","Data":"f199cea9b51631457ac52fd4aa8f018a58676c04337cc4ce60e41428c59205eb"} Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.248870 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.259102 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67cc44d6c6-sp59w"] Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.259590 4942 scope.go:117] "RemoveContainer" containerID="8b2790adbab8c3f7f1e931b6f90eb17d0d170a8ea3e8297671b08ac8cd2f42be" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.281937 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-67cc44d6c6-sp59w"] Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.358218 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-secret-key\") pod \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.358314 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-scripts\") pod \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.358409 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-config-data\") pod \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.358525 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-logs\") pod \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 
19:36:26.358670 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnrb5\" (UniqueName: \"kubernetes.io/projected/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-kube-api-access-dnrb5\") pod \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.358722 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-tls-certs\") pod \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.358922 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-combined-ca-bundle\") pod \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.360227 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-logs" (OuterVolumeSpecName: "logs") pod "3ecc91e6-4e7f-438f-8530-bb8dd55764c5" (UID: "3ecc91e6-4e7f-438f-8530-bb8dd55764c5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.384948 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3ecc91e6-4e7f-438f-8530-bb8dd55764c5" (UID: "3ecc91e6-4e7f-438f-8530-bb8dd55764c5"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.402455 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-config-data" (OuterVolumeSpecName: "config-data") pod "3ecc91e6-4e7f-438f-8530-bb8dd55764c5" (UID: "3ecc91e6-4e7f-438f-8530-bb8dd55764c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.404952 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-kube-api-access-dnrb5" (OuterVolumeSpecName: "kube-api-access-dnrb5") pod "3ecc91e6-4e7f-438f-8530-bb8dd55764c5" (UID: "3ecc91e6-4e7f-438f-8530-bb8dd55764c5"). InnerVolumeSpecName "kube-api-access-dnrb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.418128 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-scripts" (OuterVolumeSpecName: "scripts") pod "3ecc91e6-4e7f-438f-8530-bb8dd55764c5" (UID: "3ecc91e6-4e7f-438f-8530-bb8dd55764c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.425903 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ecc91e6-4e7f-438f-8530-bb8dd55764c5" (UID: "3ecc91e6-4e7f-438f-8530-bb8dd55764c5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.429029 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "3ecc91e6-4e7f-438f-8530-bb8dd55764c5" (UID: "3ecc91e6-4e7f-438f-8530-bb8dd55764c5"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.437367 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hxdjn" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.462350 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnrb5\" (UniqueName: \"kubernetes.io/projected/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-kube-api-access-dnrb5\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.462397 4942 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.462410 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.462422 4942 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.462434 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.462445 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.462457 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.563427 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtfvh\" (UniqueName: \"kubernetes.io/projected/54e11ed4-f85e-4125-acc8-b0b86cef91fb-kube-api-access-mtfvh\") pod \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\" (UID: \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\") " Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.563949 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54e11ed4-f85e-4125-acc8-b0b86cef91fb-operator-scripts\") pod \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\" (UID: \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\") " Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.564560 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54e11ed4-f85e-4125-acc8-b0b86cef91fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54e11ed4-f85e-4125-acc8-b0b86cef91fb" (UID: "54e11ed4-f85e-4125-acc8-b0b86cef91fb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.564985 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54e11ed4-f85e-4125-acc8-b0b86cef91fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.566275 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54e11ed4-f85e-4125-acc8-b0b86cef91fb-kube-api-access-mtfvh" (OuterVolumeSpecName: "kube-api-access-mtfvh") pod "54e11ed4-f85e-4125-acc8-b0b86cef91fb" (UID: "54e11ed4-f85e-4125-acc8-b0b86cef91fb"). InnerVolumeSpecName "kube-api-access-mtfvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.667178 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtfvh\" (UniqueName: \"kubernetes.io/projected/54e11ed4-f85e-4125-acc8-b0b86cef91fb-kube-api-access-mtfvh\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.050756 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" path="/var/lib/kubelet/pods/df34bdbb-8771-4d46-b5ba-29088c793a4c/volumes" Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.204140 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d64cf59b-xp7rk" event={"ID":"3ecc91e6-4e7f-438f-8530-bb8dd55764c5","Type":"ContainerDied","Data":"f9c6502e1e5809e23b3664eb42d069f99f7705e9a66bf07935b4912b98778c64"} Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.204194 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.204214 4942 scope.go:117] "RemoveContainer" containerID="4bd98068ec637cd03846de3ac7d0bc145a81ebf089811ebc4b9501aa76cae874" Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.205596 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hxdjn" Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.205595 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hxdjn" event={"ID":"54e11ed4-f85e-4125-acc8-b0b86cef91fb","Type":"ContainerDied","Data":"ddd182eb206dc22a342a3b1f3594a6f99229b825fa7f274d17d9f1ea44479c1e"} Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.205632 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddd182eb206dc22a342a3b1f3594a6f99229b825fa7f274d17d9f1ea44479c1e" Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.239169 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54d64cf59b-xp7rk"] Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.247449 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-54d64cf59b-xp7rk"] Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.009375 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.009991 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerName="glance-log" containerID="cri-o://1b92a562ea433f43d820eeece6e874b38a343cedbb1b276827ec28ad7679c4ae" gracePeriod=30 Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.010649 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerName="glance-httpd" containerID="cri-o://7f7ecb8106c4011dd2affe0db157078ed440c3dc9a5f336a7fd4922172637f01" gracePeriod=30 Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.079419 4942 scope.go:117] "RemoveContainer" containerID="036dc92b12e420ef80458fb3e23d3375424a9aed1ed6d80a904da58e73ba2659" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.221952 4942 generic.go:334] "Generic (PLEG): container finished" podID="908017b2-bbca-42f2-b6a0-af358a18d1b7" containerID="866788c6c2a051f7476fcb5d58fd9c13e62810bec69e94d021b4616590e98f0b" exitCode=0 Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.222322 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" event={"ID":"908017b2-bbca-42f2-b6a0-af358a18d1b7","Type":"ContainerDied","Data":"866788c6c2a051f7476fcb5d58fd9c13e62810bec69e94d021b4616590e98f0b"} Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.227125 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d7fm8" event={"ID":"3319773b-d924-402a-adbd-f421ee14c994","Type":"ContainerDied","Data":"64ececf7ff8da7e27ae9a12795f5871d5ca9a079d17366752709256c0742b5ba"} Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.227172 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64ececf7ff8da7e27ae9a12795f5871d5ca9a079d17366752709256c0742b5ba" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.240554 4942 generic.go:334] "Generic (PLEG): container finished" podID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerID="1b92a562ea433f43d820eeece6e874b38a343cedbb1b276827ec28ad7679c4ae" exitCode=143 Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.240668 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"5cd0efdc-b208-4270-9c23-33e01f7298be","Type":"ContainerDied","Data":"1b92a562ea433f43d820eeece6e874b38a343cedbb1b276827ec28ad7679c4ae"} Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.259149 4942 generic.go:334] "Generic (PLEG): container finished" podID="bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27" containerID="12651ed44c362c43a5a615685457fd590c1593f4afa3ac50fda9dea54a2e1f71" exitCode=0 Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.259260 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" event={"ID":"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27","Type":"ContainerDied","Data":"12651ed44c362c43a5a615685457fd590c1593f4afa3ac50fda9dea54a2e1f71"} Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.270826 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f195-account-create-update-jjctk" event={"ID":"ef4ca914-d763-484f-aa35-39dbd725d14c","Type":"ContainerDied","Data":"ca97c472a2ce2679b6717b8b6ab28e4c936e92048aa5f3ab74e769d7bcd04c68"} Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.270859 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca97c472a2ce2679b6717b8b6ab28e4c936e92048aa5f3ab74e769d7bcd04c68" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.285992 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-d7fm8" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.286037 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f9r9j" event={"ID":"de103e96-857c-4fa9-b78b-51c8f4734643","Type":"ContainerDied","Data":"b394124733feae208cca8678a899da21a42cd1a1bcdab470215b05da69af051d"} Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.286131 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b394124733feae208cca8678a899da21a42cd1a1bcdab470215b05da69af051d" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.291741 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f9r9j" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.293978 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f195-account-create-update-jjctk" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.404469 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de103e96-857c-4fa9-b78b-51c8f4734643-operator-scripts\") pod \"de103e96-857c-4fa9-b78b-51c8f4734643\" (UID: \"de103e96-857c-4fa9-b78b-51c8f4734643\") " Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.404519 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3319773b-d924-402a-adbd-f421ee14c994-operator-scripts\") pod \"3319773b-d924-402a-adbd-f421ee14c994\" (UID: \"3319773b-d924-402a-adbd-f421ee14c994\") " Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.404576 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8xk6\" (UniqueName: \"kubernetes.io/projected/3319773b-d924-402a-adbd-f421ee14c994-kube-api-access-r8xk6\") pod 
\"3319773b-d924-402a-adbd-f421ee14c994\" (UID: \"3319773b-d924-402a-adbd-f421ee14c994\") " Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.404654 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef4ca914-d763-484f-aa35-39dbd725d14c-operator-scripts\") pod \"ef4ca914-d763-484f-aa35-39dbd725d14c\" (UID: \"ef4ca914-d763-484f-aa35-39dbd725d14c\") " Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.404685 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svc8k\" (UniqueName: \"kubernetes.io/projected/de103e96-857c-4fa9-b78b-51c8f4734643-kube-api-access-svc8k\") pod \"de103e96-857c-4fa9-b78b-51c8f4734643\" (UID: \"de103e96-857c-4fa9-b78b-51c8f4734643\") " Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.404851 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w65jq\" (UniqueName: \"kubernetes.io/projected/ef4ca914-d763-484f-aa35-39dbd725d14c-kube-api-access-w65jq\") pod \"ef4ca914-d763-484f-aa35-39dbd725d14c\" (UID: \"ef4ca914-d763-484f-aa35-39dbd725d14c\") " Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.405227 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de103e96-857c-4fa9-b78b-51c8f4734643-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de103e96-857c-4fa9-b78b-51c8f4734643" (UID: "de103e96-857c-4fa9-b78b-51c8f4734643"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.405307 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de103e96-857c-4fa9-b78b-51c8f4734643-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.405339 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3319773b-d924-402a-adbd-f421ee14c994-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3319773b-d924-402a-adbd-f421ee14c994" (UID: "3319773b-d924-402a-adbd-f421ee14c994"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.405785 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef4ca914-d763-484f-aa35-39dbd725d14c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef4ca914-d763-484f-aa35-39dbd725d14c" (UID: "ef4ca914-d763-484f-aa35-39dbd725d14c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.411883 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de103e96-857c-4fa9-b78b-51c8f4734643-kube-api-access-svc8k" (OuterVolumeSpecName: "kube-api-access-svc8k") pod "de103e96-857c-4fa9-b78b-51c8f4734643" (UID: "de103e96-857c-4fa9-b78b-51c8f4734643"). InnerVolumeSpecName "kube-api-access-svc8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.412082 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef4ca914-d763-484f-aa35-39dbd725d14c-kube-api-access-w65jq" (OuterVolumeSpecName: "kube-api-access-w65jq") pod "ef4ca914-d763-484f-aa35-39dbd725d14c" (UID: "ef4ca914-d763-484f-aa35-39dbd725d14c"). InnerVolumeSpecName "kube-api-access-w65jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.412221 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3319773b-d924-402a-adbd-f421ee14c994-kube-api-access-r8xk6" (OuterVolumeSpecName: "kube-api-access-r8xk6") pod "3319773b-d924-402a-adbd-f421ee14c994" (UID: "3319773b-d924-402a-adbd-f421ee14c994"). InnerVolumeSpecName "kube-api-access-r8xk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.507576 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w65jq\" (UniqueName: \"kubernetes.io/projected/ef4ca914-d763-484f-aa35-39dbd725d14c-kube-api-access-w65jq\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.507607 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3319773b-d924-402a-adbd-f421ee14c994-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.507615 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8xk6\" (UniqueName: \"kubernetes.io/projected/3319773b-d924-402a-adbd-f421ee14c994-kube-api-access-r8xk6\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.507625 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ef4ca914-d763-484f-aa35-39dbd725d14c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.507636 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svc8k\" (UniqueName: \"kubernetes.io/projected/de103e96-857c-4fa9-b78b-51c8f4734643-kube-api-access-svc8k\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.793054 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.167:9292/healthcheck\": read tcp 10.217.0.2:51714->10.217.0.167:9292: read: connection reset by peer" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.793127 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.167:9292/healthcheck\": read tcp 10.217.0.2:51712->10.217.0.167:9292: read: connection reset by peer" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.047160 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" path="/var/lib/kubelet/pods/3ecc91e6-4e7f-438f-8530-bb8dd55764c5/volumes" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.311074 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.388380 4942 generic.go:334] "Generic (PLEG): container finished" podID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerID="0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7" exitCode=0 Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.388460 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2","Type":"ContainerDied","Data":"0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7"} Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.388494 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2","Type":"ContainerDied","Data":"5071cc9380a8d29894cf185feb69d5860ec44c77140f4a82c7520791aad9109c"} Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.388513 4942 scope.go:117] "RemoveContainer" containerID="0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.388661 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.407625 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-f9r9j" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.408346 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerStarted","Data":"ff5a66ca95a9acb98874490f26d4d917450e3dbd52c6493e4894edb793c0261b"} Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.408374 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerStarted","Data":"7bd1dc3d7ceb9cd510d24aaa8a624c13e7f5dd415a98c0dc54d4fb8d58f9ca84"} Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.408409 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d7fm8" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.408886 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f195-account-create-update-jjctk" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.430738 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.431011 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-httpd-run\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.431479 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod 
"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.431877 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-public-tls-certs\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.431904 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-logs\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.432236 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-combined-ca-bundle\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.432272 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-scripts\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.432315 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk7vw\" (UniqueName: \"kubernetes.io/projected/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-kube-api-access-rk7vw\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.432346 4942 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-config-data\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.433193 4942 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.435338 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-logs" (OuterVolumeSpecName: "logs") pod "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.452150 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.452862 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-scripts" (OuterVolumeSpecName: "scripts") pod "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.459109 4942 scope.go:117] "RemoveContainer" containerID="af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.463815 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-kube-api-access-rk7vw" (OuterVolumeSpecName: "kube-api-access-rk7vw") pod "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). InnerVolumeSpecName "kube-api-access-rk7vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.479357 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.534466 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.534839 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-public-tls-certs\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: W0218 19:36:29.535647 4942 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2/volumes/kubernetes.io~secret/public-tls-certs Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.535714 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.535894 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk7vw\" (UniqueName: \"kubernetes.io/projected/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-kube-api-access-rk7vw\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.535965 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.536031 4942 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.536084 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.536136 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.536186 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.543514 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-config-data" (OuterVolumeSpecName: "config-data") pod "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.576695 4942 scope.go:117] "RemoveContainer" containerID="0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7" Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.582974 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7\": container with ID starting with 0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7 not found: ID does not exist" containerID="0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.583010 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7"} err="failed to get container status \"0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7\": rpc error: code = NotFound desc = could not find container \"0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7\": container with ID starting with 0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7 not found: ID does not exist" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.583034 4942 scope.go:117] "RemoveContainer" containerID="af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2" Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.587728 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2\": container with ID starting with af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2 not found: ID does not exist" containerID="af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2" Feb 18 19:36:29 crc 
kubenswrapper[4942]: I0218 19:36:29.587808 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2"} err="failed to get container status \"af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2\": rpc error: code = NotFound desc = could not find container \"af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2\": container with ID starting with af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2 not found: ID does not exist" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.594868 4942 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.638208 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.638238 4942 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.757012 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.774883 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.795882 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796296 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerName="neutron-api" 
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796313 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerName="neutron-api" Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796327 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerName="neutron-httpd" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796333 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerName="neutron-httpd" Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796346 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-httpd" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796353 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-httpd" Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796363 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54e11ed4-f85e-4125-acc8-b0b86cef91fb" containerName="mariadb-database-create" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796369 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="54e11ed4-f85e-4125-acc8-b0b86cef91fb" containerName="mariadb-database-create" Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796387 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796392 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon" Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796414 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de103e96-857c-4fa9-b78b-51c8f4734643" containerName="mariadb-database-create" Feb 18 19:36:29 crc 
kubenswrapper[4942]: I0218 19:36:29.796422 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="de103e96-857c-4fa9-b78b-51c8f4734643" containerName="mariadb-database-create" Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796434 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon-log" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796440 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon-log" Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796453 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3319773b-d924-402a-adbd-f421ee14c994" containerName="mariadb-database-create" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796459 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3319773b-d924-402a-adbd-f421ee14c994" containerName="mariadb-database-create" Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796466 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-log" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796473 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-log" Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796486 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef4ca914-d763-484f-aa35-39dbd725d14c" containerName="mariadb-account-create-update" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796492 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef4ca914-d763-484f-aa35-39dbd725d14c" containerName="mariadb-account-create-update" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796658 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef4ca914-d763-484f-aa35-39dbd725d14c" 
containerName="mariadb-account-create-update" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796667 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon-log" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796681 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="54e11ed4-f85e-4125-acc8-b0b86cef91fb" containerName="mariadb-database-create" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796690 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="3319773b-d924-402a-adbd-f421ee14c994" containerName="mariadb-database-create" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796698 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796709 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-httpd" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796716 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="de103e96-857c-4fa9-b78b-51c8f4734643" containerName="mariadb-database-create" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796723 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerName="neutron-api" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796732 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerName="neutron-httpd" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796743 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-log" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.797691 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.801735 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.801860 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.811808 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.845932 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.895215 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951268 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvvfc\" (UniqueName: \"kubernetes.io/projected/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-kube-api-access-xvvfc\") pod \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\" (UID: \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951467 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-operator-scripts\") pod \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\" (UID: \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951702 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951733 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c208165d-3fd9-436b-b964-c2839e67f1f9-logs\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951754 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmp2f\" (UniqueName: \"kubernetes.io/projected/c208165d-3fd9-436b-b964-c2839e67f1f9-kube-api-access-cmp2f\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951799 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-scripts\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951851 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951916 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951943 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951979 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c208165d-3fd9-436b-b964-c2839e67f1f9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.953639 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27" (UID: "bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.966018 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-kube-api-access-xvvfc" (OuterVolumeSpecName: "kube-api-access-xvvfc") pod "bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27" (UID: "bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27"). InnerVolumeSpecName "kube-api-access-xvvfc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053057 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv798\" (UniqueName: \"kubernetes.io/projected/908017b2-bbca-42f2-b6a0-af358a18d1b7-kube-api-access-pv798\") pod \"908017b2-bbca-42f2-b6a0-af358a18d1b7\" (UID: \"908017b2-bbca-42f2-b6a0-af358a18d1b7\") " Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053255 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/908017b2-bbca-42f2-b6a0-af358a18d1b7-operator-scripts\") pod \"908017b2-bbca-42f2-b6a0-af358a18d1b7\" (UID: \"908017b2-bbca-42f2-b6a0-af358a18d1b7\") " Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053564 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-scripts\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053607 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053670 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 
19:36:30.053699 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053739 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c208165d-3fd9-436b-b964-c2839e67f1f9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053808 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-config-data\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053829 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c208165d-3fd9-436b-b964-c2839e67f1f9-logs\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053847 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmp2f\" (UniqueName: \"kubernetes.io/projected/c208165d-3fd9-436b-b964-c2839e67f1f9-kube-api-access-cmp2f\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053894 4942 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-xvvfc\" (UniqueName: \"kubernetes.io/projected/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-kube-api-access-xvvfc\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053905 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.054889 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.055055 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c208165d-3fd9-436b-b964-c2839e67f1f9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.055241 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c208165d-3fd9-436b-b964-c2839e67f1f9-logs\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.055461 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/908017b2-bbca-42f2-b6a0-af358a18d1b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "908017b2-bbca-42f2-b6a0-af358a18d1b7" (UID: "908017b2-bbca-42f2-b6a0-af358a18d1b7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.057868 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908017b2-bbca-42f2-b6a0-af358a18d1b7-kube-api-access-pv798" (OuterVolumeSpecName: "kube-api-access-pv798") pod "908017b2-bbca-42f2-b6a0-af358a18d1b7" (UID: "908017b2-bbca-42f2-b6a0-af358a18d1b7"). InnerVolumeSpecName "kube-api-access-pv798". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.069281 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-scripts\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.069314 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.070291 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-config-data\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.070819 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " 
pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.075396 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmp2f\" (UniqueName: \"kubernetes.io/projected/c208165d-3fd9-436b-b964-c2839e67f1f9-kube-api-access-cmp2f\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.120634 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.155560 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/908017b2-bbca-42f2-b6a0-af358a18d1b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.155597 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv798\" (UniqueName: \"kubernetes.io/projected/908017b2-bbca-42f2-b6a0-af358a18d1b7-kube-api-access-pv798\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.414999 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" event={"ID":"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27","Type":"ContainerDied","Data":"fd25e15f2b69ef489266f93ff5e54bcabaad72ec94bfa30e588907d0fa96302e"} Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.415036 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd25e15f2b69ef489266f93ff5e54bcabaad72ec94bfa30e588907d0fa96302e" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.415086 4942 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.419753 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" event={"ID":"908017b2-bbca-42f2-b6a0-af358a18d1b7","Type":"ContainerDied","Data":"527d26ceec3f8972dda44cae7e3560073a290e058817bf5b7e32fe2b65220c1d"} Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.419804 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="527d26ceec3f8972dda44cae7e3560073a290e058817bf5b7e32fe2b65220c1d" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.419822 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.423142 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.074007 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" path="/var/lib/kubelet/pods/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2/volumes" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.079088 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.435140 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c208165d-3fd9-436b-b964-c2839e67f1f9","Type":"ContainerStarted","Data":"9bec0bad1afe526c4e78ff309b48fd1aa33f8d684edcd63d83c8ebbe72fe7dd7"} Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.446598 4942 generic.go:334] "Generic (PLEG): container finished" podID="5cd0efdc-b208-4270-9c23-33e01f7298be" 
containerID="7f7ecb8106c4011dd2affe0db157078ed440c3dc9a5f336a7fd4922172637f01" exitCode=0 Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.446680 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5cd0efdc-b208-4270-9c23-33e01f7298be","Type":"ContainerDied","Data":"7f7ecb8106c4011dd2affe0db157078ed440c3dc9a5f336a7fd4922172637f01"} Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.452364 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerStarted","Data":"dab22ef643cf4a1848ae3f2c3600077ca2b9255a63d8f2ec325041316075d69d"} Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.452528 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="ceilometer-central-agent" containerID="cri-o://f199cea9b51631457ac52fd4aa8f018a58676c04337cc4ce60e41428c59205eb" gracePeriod=30 Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.452572 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.452604 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="proxy-httpd" containerID="cri-o://dab22ef643cf4a1848ae3f2c3600077ca2b9255a63d8f2ec325041316075d69d" gracePeriod=30 Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.452661 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="ceilometer-notification-agent" containerID="cri-o://7bd1dc3d7ceb9cd510d24aaa8a624c13e7f5dd415a98c0dc54d4fb8d58f9ca84" gracePeriod=30 Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.452622 4942 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="sg-core" containerID="cri-o://ff5a66ca95a9acb98874490f26d4d917450e3dbd52c6493e4894edb793c0261b" gracePeriod=30 Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.479474 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.981627669 podStartE2EDuration="9.479435865s" podCreationTimestamp="2026-02-18 19:36:22 +0000 UTC" firstStartedPulling="2026-02-18 19:36:24.127785924 +0000 UTC m=+1143.832718579" lastFinishedPulling="2026-02-18 19:36:30.62559411 +0000 UTC m=+1150.330526775" observedRunningTime="2026-02-18 19:36:31.477347391 +0000 UTC m=+1151.182280056" watchObservedRunningTime="2026-02-18 19:36:31.479435865 +0000 UTC m=+1151.184368530" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.787525 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.892438 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-logs\") pod \"5cd0efdc-b208-4270-9c23-33e01f7298be\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.892573 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-httpd-run\") pod \"5cd0efdc-b208-4270-9c23-33e01f7298be\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.892607 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-internal-tls-certs\") pod \"5cd0efdc-b208-4270-9c23-33e01f7298be\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.892651 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bv8n\" (UniqueName: \"kubernetes.io/projected/5cd0efdc-b208-4270-9c23-33e01f7298be-kube-api-access-8bv8n\") pod \"5cd0efdc-b208-4270-9c23-33e01f7298be\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.892693 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-scripts\") pod \"5cd0efdc-b208-4270-9c23-33e01f7298be\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.892724 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-combined-ca-bundle\") pod \"5cd0efdc-b208-4270-9c23-33e01f7298be\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.892748 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"5cd0efdc-b208-4270-9c23-33e01f7298be\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.892789 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-config-data\") pod \"5cd0efdc-b208-4270-9c23-33e01f7298be\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.894168 4942 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5cd0efdc-b208-4270-9c23-33e01f7298be" (UID: "5cd0efdc-b208-4270-9c23-33e01f7298be"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.894411 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-logs" (OuterVolumeSpecName: "logs") pod "5cd0efdc-b208-4270-9c23-33e01f7298be" (UID: "5cd0efdc-b208-4270-9c23-33e01f7298be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.900102 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cd0efdc-b208-4270-9c23-33e01f7298be-kube-api-access-8bv8n" (OuterVolumeSpecName: "kube-api-access-8bv8n") pod "5cd0efdc-b208-4270-9c23-33e01f7298be" (UID: "5cd0efdc-b208-4270-9c23-33e01f7298be"). InnerVolumeSpecName "kube-api-access-8bv8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.900190 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "5cd0efdc-b208-4270-9c23-33e01f7298be" (UID: "5cd0efdc-b208-4270-9c23-33e01f7298be"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.900319 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-scripts" (OuterVolumeSpecName: "scripts") pod "5cd0efdc-b208-4270-9c23-33e01f7298be" (UID: "5cd0efdc-b208-4270-9c23-33e01f7298be"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.928443 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.941239 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cd0efdc-b208-4270-9c23-33e01f7298be" (UID: "5cd0efdc-b208-4270-9c23-33e01f7298be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.966707 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-config-data" (OuterVolumeSpecName: "config-data") pod "5cd0efdc-b208-4270-9c23-33e01f7298be" (UID: "5cd0efdc-b208-4270-9c23-33e01f7298be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.984537 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5cd0efdc-b208-4270-9c23-33e01f7298be" (UID: "5cd0efdc-b208-4270-9c23-33e01f7298be"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.995741 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.995962 4942 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.996048 4942 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.996126 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bv8n\" (UniqueName: \"kubernetes.io/projected/5cd0efdc-b208-4270-9c23-33e01f7298be-kube-api-access-8bv8n\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.996217 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.996306 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.996398 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.996475 4942 reconciler_common.go:293] "Volume 
detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.021650 4942 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.089581 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.101280 4942 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.169703 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5794bf846d-82xzg"] Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.169940 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5794bf846d-82xzg" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerName="placement-log" containerID="cri-o://f8a851dfe023e77ce2012d0b840a4729b646e24254cac11ed22579fa4353c01b" gracePeriod=30 Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.170320 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5794bf846d-82xzg" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerName="placement-api" containerID="cri-o://9ef44ea2e648e2bbfb3bd289c97d6ea2ed93750446192377e2017b04b006f489" gracePeriod=30 Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.490991 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"c208165d-3fd9-436b-b964-c2839e67f1f9","Type":"ContainerStarted","Data":"82853c4c7a5be022d0766662ed2ed2a6066b1ff3042e37f5caa28d7873e5610f"} Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.512480 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5cd0efdc-b208-4270-9c23-33e01f7298be","Type":"ContainerDied","Data":"a6a851f31a8af36c76a03d082cd2bcde730a917e0fda0acf37bf24b1cd98ff69"} Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.512531 4942 scope.go:117] "RemoveContainer" containerID="7f7ecb8106c4011dd2affe0db157078ed440c3dc9a5f336a7fd4922172637f01" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.512659 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.527798 4942 generic.go:334] "Generic (PLEG): container finished" podID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerID="f8a851dfe023e77ce2012d0b840a4729b646e24254cac11ed22579fa4353c01b" exitCode=143 Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.527903 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5794bf846d-82xzg" event={"ID":"ab301488-e86d-4ba2-b628-f4ea689acd3b","Type":"ContainerDied","Data":"f8a851dfe023e77ce2012d0b840a4729b646e24254cac11ed22579fa4353c01b"} Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.560904 4942 scope.go:117] "RemoveContainer" containerID="1b92a562ea433f43d820eeece6e874b38a343cedbb1b276827ec28ad7679c4ae" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.568192 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.572905 4942 generic.go:334] "Generic (PLEG): container finished" podID="696aecc5-9837-4941-a9e2-06c1743b6983" containerID="dab22ef643cf4a1848ae3f2c3600077ca2b9255a63d8f2ec325041316075d69d" 
exitCode=0 Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.572933 4942 generic.go:334] "Generic (PLEG): container finished" podID="696aecc5-9837-4941-a9e2-06c1743b6983" containerID="ff5a66ca95a9acb98874490f26d4d917450e3dbd52c6493e4894edb793c0261b" exitCode=2 Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.572943 4942 generic.go:334] "Generic (PLEG): container finished" podID="696aecc5-9837-4941-a9e2-06c1743b6983" containerID="7bd1dc3d7ceb9cd510d24aaa8a624c13e7f5dd415a98c0dc54d4fb8d58f9ca84" exitCode=0 Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.573840 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerDied","Data":"dab22ef643cf4a1848ae3f2c3600077ca2b9255a63d8f2ec325041316075d69d"} Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.573873 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerDied","Data":"ff5a66ca95a9acb98874490f26d4d917450e3dbd52c6493e4894edb793c0261b"} Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.573883 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerDied","Data":"7bd1dc3d7ceb9cd510d24aaa8a624c13e7f5dd415a98c0dc54d4fb8d58f9ca84"} Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.577927 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.606722 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:36:32 crc kubenswrapper[4942]: E0218 19:36:32.607094 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27" containerName="mariadb-account-create-update" Feb 18 19:36:32 crc kubenswrapper[4942]: 
I0218 19:36:32.607111 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27" containerName="mariadb-account-create-update" Feb 18 19:36:32 crc kubenswrapper[4942]: E0218 19:36:32.607129 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerName="glance-log" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.607134 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerName="glance-log" Feb 18 19:36:32 crc kubenswrapper[4942]: E0218 19:36:32.607156 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908017b2-bbca-42f2-b6a0-af358a18d1b7" containerName="mariadb-account-create-update" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.607163 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="908017b2-bbca-42f2-b6a0-af358a18d1b7" containerName="mariadb-account-create-update" Feb 18 19:36:32 crc kubenswrapper[4942]: E0218 19:36:32.607175 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerName="glance-httpd" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.607181 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerName="glance-httpd" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.607371 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27" containerName="mariadb-account-create-update" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.607384 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerName="glance-log" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.607396 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="908017b2-bbca-42f2-b6a0-af358a18d1b7" containerName="mariadb-account-create-update" 
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.607404 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerName="glance-httpd"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.608316 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.616136 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.616147 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.650107 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.716749 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.716814 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1669290-6aa1-4a36-8397-a62c14647c13-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.716838 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.716859 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1669290-6aa1-4a36-8397-a62c14647c13-logs\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.716888 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.716943 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.716959 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvwwb\" (UniqueName: \"kubernetes.io/projected/c1669290-6aa1-4a36-8397-a62c14647c13-kube-api-access-fvwwb\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.716992 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.818248 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.818339 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.818361 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvwwb\" (UniqueName: \"kubernetes.io/projected/c1669290-6aa1-4a36-8397-a62c14647c13-kube-api-access-fvwwb\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.818396 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.818471 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.818505 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1669290-6aa1-4a36-8397-a62c14647c13-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.818538 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.818565 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1669290-6aa1-4a36-8397-a62c14647c13-logs\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.819097 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1669290-6aa1-4a36-8397-a62c14647c13-logs\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.819113 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1669290-6aa1-4a36-8397-a62c14647c13-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.819405 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.824086 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.825918 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.826467 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.826555 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.839329 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvwwb\" (UniqueName: \"kubernetes.io/projected/c1669290-6aa1-4a36-8397-a62c14647c13-kube-api-access-fvwwb\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.863340 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.937677 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 18 19:36:33 crc kubenswrapper[4942]: I0218 19:36:33.061330 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" path="/var/lib/kubelet/pods/5cd0efdc-b208-4270-9c23-33e01f7298be/volumes"
Feb 18 19:36:33 crc kubenswrapper[4942]: I0218 19:36:33.514751 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 18 19:36:33 crc kubenswrapper[4942]: W0218 19:36:33.517539 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1669290_6aa1_4a36_8397_a62c14647c13.slice/crio-9144f806c209abb6436b4db321b570024f4498390566722f0745c8f40e0fee98 WatchSource:0}: Error finding container 9144f806c209abb6436b4db321b570024f4498390566722f0745c8f40e0fee98: Status 404 returned error can't find the container with id 9144f806c209abb6436b4db321b570024f4498390566722f0745c8f40e0fee98
Feb 18 19:36:33 crc kubenswrapper[4942]: I0218 19:36:33.619305 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c208165d-3fd9-436b-b964-c2839e67f1f9","Type":"ContainerStarted","Data":"36d0d041423d481c40dd46c8de918565aa453789659b355b6b0e7f64245b52d4"}
Feb 18 19:36:33 crc kubenswrapper[4942]: I0218 19:36:33.628167 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c1669290-6aa1-4a36-8397-a62c14647c13","Type":"ContainerStarted","Data":"9144f806c209abb6436b4db321b570024f4498390566722f0745c8f40e0fee98"}
Feb 18 19:36:33 crc kubenswrapper[4942]: I0218 19:36:33.656357 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.656340872 podStartE2EDuration="4.656340872s" podCreationTimestamp="2026-02-18 19:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:33.648772455 +0000 UTC m=+1153.353705140" watchObservedRunningTime="2026-02-18 19:36:33.656340872 +0000 UTC m=+1153.361273537"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.005859 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bbrrn"]
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.007326 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bbrrn"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.015445 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.015643 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.015738 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r8ppn"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.036440 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bbrrn"]
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.047858 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzdtz\" (UniqueName: \"kubernetes.io/projected/e14c764c-c1b5-4196-a48b-2aff4c38782b-kube-api-access-hzdtz\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.047913 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.047986 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-config-data\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.048041 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-scripts\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.150074 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-scripts\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.150235 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzdtz\" (UniqueName: \"kubernetes.io/projected/e14c764c-c1b5-4196-a48b-2aff4c38782b-kube-api-access-hzdtz\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.150278 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.150425 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-config-data\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.155941 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.156695 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-config-data\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.170386 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzdtz\" (UniqueName: \"kubernetes.io/projected/e14c764c-c1b5-4196-a48b-2aff4c38782b-kube-api-access-hzdtz\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.170894 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-scripts\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.329244 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bbrrn"
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.645719 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c1669290-6aa1-4a36-8397-a62c14647c13","Type":"ContainerStarted","Data":"539326ba24780274bb3169a41e3f0301cbc69d974ca578d45c9e263dd0889740"}
Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.862625 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bbrrn"]
Feb 18 19:36:34 crc kubenswrapper[4942]: W0218 19:36:34.870199 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode14c764c_c1b5_4196_a48b_2aff4c38782b.slice/crio-41848844e5ca0ec4e07ca2fcd7497cd5893a17d562a97a6ad440536587e4b055 WatchSource:0}: Error finding container 41848844e5ca0ec4e07ca2fcd7497cd5893a17d562a97a6ad440536587e4b055: Status 404 returned error can't find the container with id 41848844e5ca0ec4e07ca2fcd7497cd5893a17d562a97a6ad440536587e4b055
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.658182 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c1669290-6aa1-4a36-8397-a62c14647c13","Type":"ContainerStarted","Data":"9ef69375edeb57daae67118a01e73cabffdf8cd2b54e9dc26ca9ce13b9e3aab6"}
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.663306 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bbrrn" event={"ID":"e14c764c-c1b5-4196-a48b-2aff4c38782b","Type":"ContainerStarted","Data":"41848844e5ca0ec4e07ca2fcd7497cd5893a17d562a97a6ad440536587e4b055"}
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.691897 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.691881184 podStartE2EDuration="3.691881184s" podCreationTimestamp="2026-02-18 19:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:35.681319409 +0000 UTC m=+1155.386252074" watchObservedRunningTime="2026-02-18 19:36:35.691881184 +0000 UTC m=+1155.396813849"
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.710219 4942 generic.go:334] "Generic (PLEG): container finished" podID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerID="9ef44ea2e648e2bbfb3bd289c97d6ea2ed93750446192377e2017b04b006f489" exitCode=0
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.710263 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5794bf846d-82xzg" event={"ID":"ab301488-e86d-4ba2-b628-f4ea689acd3b","Type":"ContainerDied","Data":"9ef44ea2e648e2bbfb3bd289c97d6ea2ed93750446192377e2017b04b006f489"}
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.788778 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5794bf846d-82xzg"
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.884887 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-scripts\") pod \"ab301488-e86d-4ba2-b628-f4ea689acd3b\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") "
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.885019 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v89br\" (UniqueName: \"kubernetes.io/projected/ab301488-e86d-4ba2-b628-f4ea689acd3b-kube-api-access-v89br\") pod \"ab301488-e86d-4ba2-b628-f4ea689acd3b\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") "
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.885041 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab301488-e86d-4ba2-b628-f4ea689acd3b-logs\") pod \"ab301488-e86d-4ba2-b628-f4ea689acd3b\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") "
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.885084 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-config-data\") pod \"ab301488-e86d-4ba2-b628-f4ea689acd3b\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") "
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.885099 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-combined-ca-bundle\") pod \"ab301488-e86d-4ba2-b628-f4ea689acd3b\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") "
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.885377 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-internal-tls-certs\") pod \"ab301488-e86d-4ba2-b628-f4ea689acd3b\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") "
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.885400 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-public-tls-certs\") pod \"ab301488-e86d-4ba2-b628-f4ea689acd3b\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") "
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.886197 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab301488-e86d-4ba2-b628-f4ea689acd3b-logs" (OuterVolumeSpecName: "logs") pod "ab301488-e86d-4ba2-b628-f4ea689acd3b" (UID: "ab301488-e86d-4ba2-b628-f4ea689acd3b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.893913 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab301488-e86d-4ba2-b628-f4ea689acd3b-kube-api-access-v89br" (OuterVolumeSpecName: "kube-api-access-v89br") pod "ab301488-e86d-4ba2-b628-f4ea689acd3b" (UID: "ab301488-e86d-4ba2-b628-f4ea689acd3b"). InnerVolumeSpecName "kube-api-access-v89br". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.904104 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-scripts" (OuterVolumeSpecName: "scripts") pod "ab301488-e86d-4ba2-b628-f4ea689acd3b" (UID: "ab301488-e86d-4ba2-b628-f4ea689acd3b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.968045 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-config-data" (OuterVolumeSpecName: "config-data") pod "ab301488-e86d-4ba2-b628-f4ea689acd3b" (UID: "ab301488-e86d-4ba2-b628-f4ea689acd3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.994875 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v89br\" (UniqueName: \"kubernetes.io/projected/ab301488-e86d-4ba2-b628-f4ea689acd3b-kube-api-access-v89br\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.999461 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab301488-e86d-4ba2-b628-f4ea689acd3b-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.999502 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.999530 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.013170 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab301488-e86d-4ba2-b628-f4ea689acd3b" (UID: "ab301488-e86d-4ba2-b628-f4ea689acd3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.054994 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ab301488-e86d-4ba2-b628-f4ea689acd3b" (UID: "ab301488-e86d-4ba2-b628-f4ea689acd3b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.102204 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.102234 4942 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.108967 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ab301488-e86d-4ba2-b628-f4ea689acd3b" (UID: "ab301488-e86d-4ba2-b628-f4ea689acd3b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.204132 4942 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.753270 4942 generic.go:334] "Generic (PLEG): container finished" podID="696aecc5-9837-4941-a9e2-06c1743b6983" containerID="f199cea9b51631457ac52fd4aa8f018a58676c04337cc4ce60e41428c59205eb" exitCode=0
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.753386 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerDied","Data":"f199cea9b51631457ac52fd4aa8f018a58676c04337cc4ce60e41428c59205eb"}
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.757616 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5794bf846d-82xzg" event={"ID":"ab301488-e86d-4ba2-b628-f4ea689acd3b","Type":"ContainerDied","Data":"5655340f4bf0abd595b0c47b02dacb9178105661696797fd33a844b3ed3d1922"}
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.757672 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5794bf846d-82xzg"
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.757672 4942 scope.go:117] "RemoveContainer" containerID="9ef44ea2e648e2bbfb3bd289c97d6ea2ed93750446192377e2017b04b006f489"
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.802358 4942 scope.go:117] "RemoveContainer" containerID="f8a851dfe023e77ce2012d0b840a4729b646e24254cac11ed22579fa4353c01b"
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.812341 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5794bf846d-82xzg"]
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.826454 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5794bf846d-82xzg"]
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.903627 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.919807 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-config-data\") pod \"696aecc5-9837-4941-a9e2-06c1743b6983\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") "
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.919902 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-sg-core-conf-yaml\") pod \"696aecc5-9837-4941-a9e2-06c1743b6983\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") "
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.919936 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-log-httpd\") pod \"696aecc5-9837-4941-a9e2-06c1743b6983\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") "
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.919982 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-run-httpd\") pod \"696aecc5-9837-4941-a9e2-06c1743b6983\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") "
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.920119 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwktj\" (UniqueName: \"kubernetes.io/projected/696aecc5-9837-4941-a9e2-06c1743b6983-kube-api-access-zwktj\") pod \"696aecc5-9837-4941-a9e2-06c1743b6983\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") "
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.920144 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-combined-ca-bundle\") pod \"696aecc5-9837-4941-a9e2-06c1743b6983\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") "
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.920203 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-scripts\") pod \"696aecc5-9837-4941-a9e2-06c1743b6983\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") "
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.921361 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "696aecc5-9837-4941-a9e2-06c1743b6983" (UID: "696aecc5-9837-4941-a9e2-06c1743b6983"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.921798 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "696aecc5-9837-4941-a9e2-06c1743b6983" (UID: "696aecc5-9837-4941-a9e2-06c1743b6983"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.924926 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-scripts" (OuterVolumeSpecName: "scripts") pod "696aecc5-9837-4941-a9e2-06c1743b6983" (UID: "696aecc5-9837-4941-a9e2-06c1743b6983"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.942360 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/696aecc5-9837-4941-a9e2-06c1743b6983-kube-api-access-zwktj" (OuterVolumeSpecName: "kube-api-access-zwktj") pod "696aecc5-9837-4941-a9e2-06c1743b6983" (UID: "696aecc5-9837-4941-a9e2-06c1743b6983"). InnerVolumeSpecName "kube-api-access-zwktj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.972369 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "696aecc5-9837-4941-a9e2-06c1743b6983" (UID: "696aecc5-9837-4941-a9e2-06c1743b6983"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.024686 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwktj\" (UniqueName: \"kubernetes.io/projected/696aecc5-9837-4941-a9e2-06c1743b6983-kube-api-access-zwktj\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.024716 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.024726 4942 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.024735 4942 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.024743 4942 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.068976 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" path="/var/lib/kubelet/pods/ab301488-e86d-4ba2-b628-f4ea689acd3b/volumes"
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.095975 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "696aecc5-9837-4941-a9e2-06c1743b6983" (UID: "696aecc5-9837-4941-a9e2-06c1743b6983"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.096033 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-config-data" (OuterVolumeSpecName: "config-data") pod "696aecc5-9837-4941-a9e2-06c1743b6983" (UID: "696aecc5-9837-4941-a9e2-06c1743b6983"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.126455 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.126484 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.773343 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerDied","Data":"307cb7b1955145e6c351de8d60f608d94882a5c445ff5005916b7c10fe933d13"}
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.773390 4942 scope.go:117] "RemoveContainer" containerID="dab22ef643cf4a1848ae3f2c3600077ca2b9255a63d8f2ec325041316075d69d"
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.773399 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.799396 4942 scope.go:117] "RemoveContainer" containerID="ff5a66ca95a9acb98874490f26d4d917450e3dbd52c6493e4894edb793c0261b"
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.811549 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.820671 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.827651 4942 scope.go:117] "RemoveContainer" containerID="7bd1dc3d7ceb9cd510d24aaa8a624c13e7f5dd415a98c0dc54d4fb8d58f9ca84"
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843245 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:36:37 crc kubenswrapper[4942]: E0218 19:36:37.843617 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerName="placement-log"
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843630 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerName="placement-log"
Feb 18 19:36:37 crc kubenswrapper[4942]: E0218 19:36:37.843647 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerName="placement-api"
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843653 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerName="placement-api"
Feb 18 19:36:37 crc kubenswrapper[4942]: E0218 19:36:37.843669 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="ceilometer-notification-agent"
Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843675 4942 state_mem.go:107] "Deleted CPUSet assignment"
podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="ceilometer-notification-agent" Feb 18 19:36:37 crc kubenswrapper[4942]: E0218 19:36:37.843692 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="proxy-httpd" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843697 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="proxy-httpd" Feb 18 19:36:37 crc kubenswrapper[4942]: E0218 19:36:37.843711 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="sg-core" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843717 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="sg-core" Feb 18 19:36:37 crc kubenswrapper[4942]: E0218 19:36:37.843729 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="ceilometer-central-agent" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843736 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="ceilometer-central-agent" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843907 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerName="placement-log" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843918 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerName="placement-api" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843926 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="proxy-httpd" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843940 4942 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="ceilometer-notification-agent" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843951 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="sg-core" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843960 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="ceilometer-central-agent" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.845601 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.857355 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.857429 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.858008 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.868368 4942 scope.go:117] "RemoveContainer" containerID="f199cea9b51631457ac52fd4aa8f018a58676c04337cc4ce60e41428c59205eb" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.942005 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-run-httpd\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.942206 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-log-httpd\") pod \"ceilometer-0\" (UID: 
\"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.942292 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-scripts\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.942354 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.942439 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-config-data\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.942487 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.942632 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgml2\" (UniqueName: \"kubernetes.io/projected/236551c8-9c37-4188-aea0-7ea6cb91c093-kube-api-access-rgml2\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 
19:36:38.044050 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-log-httpd\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.044112 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-scripts\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.044138 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.044180 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-config-data\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.044538 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-log-httpd\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.044946 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.045382 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgml2\" (UniqueName: \"kubernetes.io/projected/236551c8-9c37-4188-aea0-7ea6cb91c093-kube-api-access-rgml2\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.045493 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-run-httpd\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.045970 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-run-httpd\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.049443 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-scripts\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.049456 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.051598 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.055950 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-config-data\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.071671 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgml2\" (UniqueName: \"kubernetes.io/projected/236551c8-9c37-4188-aea0-7ea6cb91c093-kube-api-access-rgml2\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.178960 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.507031 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.701587 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.784114 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerStarted","Data":"e8bdb79b574b6c2621bf8442e6633e45aa4f74b8b682ec57dcc5865cbb5bdecf"} Feb 18 19:36:39 crc kubenswrapper[4942]: I0218 19:36:39.047858 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" path="/var/lib/kubelet/pods/696aecc5-9837-4941-a9e2-06c1743b6983/volumes" Feb 18 19:36:40 crc kubenswrapper[4942]: I0218 19:36:40.424071 4942 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:36:40 crc kubenswrapper[4942]: I0218 19:36:40.424427 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:36:40 crc kubenswrapper[4942]: I0218 19:36:40.457374 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:36:40 crc kubenswrapper[4942]: I0218 19:36:40.472954 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:36:40 crc kubenswrapper[4942]: I0218 19:36:40.810166 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:36:40 crc kubenswrapper[4942]: I0218 19:36:40.810206 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:36:42 crc kubenswrapper[4942]: I0218 19:36:42.714431 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:36:42 crc kubenswrapper[4942]: I0218 19:36:42.715134 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:36:42 crc kubenswrapper[4942]: I0218 19:36:42.938712 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:42 crc kubenswrapper[4942]: I0218 19:36:42.939047 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:42 crc kubenswrapper[4942]: I0218 19:36:42.970463 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:42 crc kubenswrapper[4942]: I0218 19:36:42.985824 4942 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:43 crc kubenswrapper[4942]: I0218 19:36:43.848200 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:43 crc kubenswrapper[4942]: I0218 19:36:43.848245 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:44 crc kubenswrapper[4942]: I0218 19:36:44.445905 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:36:44 crc kubenswrapper[4942]: I0218 19:36:44.446138 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="9cf66c1e-2f67-4785-85e9-f0b06e578d29" containerName="watcher-decision-engine" containerID="cri-o://565df78e0898331235735ffa8948cdc3dea82d61dc2d3519faa61301dd4f6ffd" gracePeriod=30 Feb 18 19:36:44 crc kubenswrapper[4942]: I0218 19:36:44.853284 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerStarted","Data":"58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b"} Feb 18 19:36:44 crc kubenswrapper[4942]: I0218 19:36:44.855037 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bbrrn" event={"ID":"e14c764c-c1b5-4196-a48b-2aff4c38782b","Type":"ContainerStarted","Data":"ebb11ccd20be89bb58e99f7b4e01c65708315c8dea33a27fefa79d1ee13756e9"} Feb 18 19:36:45 crc kubenswrapper[4942]: I0218 19:36:45.872133 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerStarted","Data":"3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd"} Feb 18 19:36:45 crc kubenswrapper[4942]: I0218 19:36:45.872422 4942 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerStarted","Data":"656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546"} Feb 18 19:36:45 crc kubenswrapper[4942]: I0218 19:36:45.934450 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:45 crc kubenswrapper[4942]: I0218 19:36:45.934536 4942 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:36:45 crc kubenswrapper[4942]: I0218 19:36:45.936030 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:45 crc kubenswrapper[4942]: I0218 19:36:45.964298 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-bbrrn" podStartSLOduration=3.776740416 podStartE2EDuration="12.964280369s" podCreationTimestamp="2026-02-18 19:36:33 +0000 UTC" firstStartedPulling="2026-02-18 19:36:34.874398655 +0000 UTC m=+1154.579331310" lastFinishedPulling="2026-02-18 19:36:44.061938608 +0000 UTC m=+1163.766871263" observedRunningTime="2026-02-18 19:36:44.874457899 +0000 UTC m=+1164.579390564" watchObservedRunningTime="2026-02-18 19:36:45.964280369 +0000 UTC m=+1165.669213034" Feb 18 19:36:48 crc kubenswrapper[4942]: I0218 19:36:48.900603 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerStarted","Data":"df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1"} Feb 18 19:36:48 crc kubenswrapper[4942]: I0218 19:36:48.901037 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="ceilometer-central-agent" 
containerID="cri-o://58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b" gracePeriod=30 Feb 18 19:36:48 crc kubenswrapper[4942]: I0218 19:36:48.901209 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="proxy-httpd" containerID="cri-o://df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1" gracePeriod=30 Feb 18 19:36:48 crc kubenswrapper[4942]: I0218 19:36:48.901247 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="sg-core" containerID="cri-o://3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd" gracePeriod=30 Feb 18 19:36:48 crc kubenswrapper[4942]: I0218 19:36:48.901279 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="ceilometer-notification-agent" containerID="cri-o://656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546" gracePeriod=30 Feb 18 19:36:48 crc kubenswrapper[4942]: I0218 19:36:48.901341 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:36:49 crc kubenswrapper[4942]: I0218 19:36:49.914145 4942 generic.go:334] "Generic (PLEG): container finished" podID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerID="df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1" exitCode=0 Feb 18 19:36:49 crc kubenswrapper[4942]: I0218 19:36:49.914468 4942 generic.go:334] "Generic (PLEG): container finished" podID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerID="3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd" exitCode=2 Feb 18 19:36:49 crc kubenswrapper[4942]: I0218 19:36:49.914235 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerDied","Data":"df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1"} Feb 18 19:36:49 crc kubenswrapper[4942]: I0218 19:36:49.914528 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerDied","Data":"3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd"} Feb 18 19:36:49 crc kubenswrapper[4942]: I0218 19:36:49.914553 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerDied","Data":"656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546"} Feb 18 19:36:49 crc kubenswrapper[4942]: I0218 19:36:49.914484 4942 generic.go:334] "Generic (PLEG): container finished" podID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerID="656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546" exitCode=0 Feb 18 19:36:50 crc kubenswrapper[4942]: I0218 19:36:50.923123 4942 generic.go:334] "Generic (PLEG): container finished" podID="9cf66c1e-2f67-4785-85e9-f0b06e578d29" containerID="565df78e0898331235735ffa8948cdc3dea82d61dc2d3519faa61301dd4f6ffd" exitCode=0 Feb 18 19:36:50 crc kubenswrapper[4942]: I0218 19:36:50.923203 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9cf66c1e-2f67-4785-85e9-f0b06e578d29","Type":"ContainerDied","Data":"565df78e0898331235735ffa8948cdc3dea82d61dc2d3519faa61301dd4f6ffd"} Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.473105 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.498580 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.020255265 podStartE2EDuration="14.498558927s" podCreationTimestamp="2026-02-18 19:36:37 +0000 UTC" firstStartedPulling="2026-02-18 19:36:38.717696648 +0000 UTC m=+1158.422629303" lastFinishedPulling="2026-02-18 19:36:48.1960003 +0000 UTC m=+1167.900932965" observedRunningTime="2026-02-18 19:36:48.925550358 +0000 UTC m=+1168.630483043" watchObservedRunningTime="2026-02-18 19:36:51.498558927 +0000 UTC m=+1171.203491592" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.520406 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-combined-ca-bundle\") pod \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.520996 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-custom-prometheus-ca\") pod \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.521238 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-config-data\") pod \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.521317 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cf66c1e-2f67-4785-85e9-f0b06e578d29-logs\") pod 
\"9cf66c1e-2f67-4785-85e9-f0b06e578d29\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.521419 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89hpq\" (UniqueName: \"kubernetes.io/projected/9cf66c1e-2f67-4785-85e9-f0b06e578d29-kube-api-access-89hpq\") pod \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.521718 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cf66c1e-2f67-4785-85e9-f0b06e578d29-logs" (OuterVolumeSpecName: "logs") pod "9cf66c1e-2f67-4785-85e9-f0b06e578d29" (UID: "9cf66c1e-2f67-4785-85e9-f0b06e578d29"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.522746 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cf66c1e-2f67-4785-85e9-f0b06e578d29-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.530094 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf66c1e-2f67-4785-85e9-f0b06e578d29-kube-api-access-89hpq" (OuterVolumeSpecName: "kube-api-access-89hpq") pod "9cf66c1e-2f67-4785-85e9-f0b06e578d29" (UID: "9cf66c1e-2f67-4785-85e9-f0b06e578d29"). InnerVolumeSpecName "kube-api-access-89hpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.553451 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9cf66c1e-2f67-4785-85e9-f0b06e578d29" (UID: "9cf66c1e-2f67-4785-85e9-f0b06e578d29"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.572991 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "9cf66c1e-2f67-4785-85e9-f0b06e578d29" (UID: "9cf66c1e-2f67-4785-85e9-f0b06e578d29"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.576452 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-config-data" (OuterVolumeSpecName: "config-data") pod "9cf66c1e-2f67-4785-85e9-f0b06e578d29" (UID: "9cf66c1e-2f67-4785-85e9-f0b06e578d29"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.623998 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.624024 4942 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.624035 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.624044 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89hpq\" (UniqueName: \"kubernetes.io/projected/9cf66c1e-2f67-4785-85e9-f0b06e578d29-kube-api-access-89hpq\") on node 
\"crc\" DevicePath \"\"" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.942726 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9cf66c1e-2f67-4785-85e9-f0b06e578d29","Type":"ContainerDied","Data":"e7e10840e11edbe6af151474727a77162010126b060487f8547836dcab0bb348"} Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.942805 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.943002 4942 scope.go:117] "RemoveContainer" containerID="565df78e0898331235735ffa8948cdc3dea82d61dc2d3519faa61301dd4f6ffd" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.994457 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.022797 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.031811 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:36:52 crc kubenswrapper[4942]: E0218 19:36:52.032295 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf66c1e-2f67-4785-85e9-f0b06e578d29" containerName="watcher-decision-engine" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.032322 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf66c1e-2f67-4785-85e9-f0b06e578d29" containerName="watcher-decision-engine" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.032539 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf66c1e-2f67-4785-85e9-f0b06e578d29" containerName="watcher-decision-engine" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.033284 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.035393 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.052832 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.132483 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.132604 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.132680 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dcf4849-8f10-4a95-9168-8933cc67b424-logs\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.132704 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m95fm\" (UniqueName: \"kubernetes.io/projected/4dcf4849-8f10-4a95-9168-8933cc67b424-kube-api-access-m95fm\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " 
pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.132838 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.234600 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.234946 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dcf4849-8f10-4a95-9168-8933cc67b424-logs\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.235080 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m95fm\" (UniqueName: \"kubernetes.io/projected/4dcf4849-8f10-4a95-9168-8933cc67b424-kube-api-access-m95fm\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.235229 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: 
I0218 19:36:52.235374 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dcf4849-8f10-4a95-9168-8933cc67b424-logs\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.235506 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.239795 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.241243 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.242416 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.252611 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m95fm\" 
(UniqueName: \"kubernetes.io/projected/4dcf4849-8f10-4a95-9168-8933cc67b424-kube-api-access-m95fm\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.372658 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.830867 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.963624 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4dcf4849-8f10-4a95-9168-8933cc67b424","Type":"ContainerStarted","Data":"07abe5ef3f3d69bbf6db31ae0d309c288435fa7a22331eb7453098f36ae14d6a"} Feb 18 19:36:53 crc kubenswrapper[4942]: I0218 19:36:53.047207 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cf66c1e-2f67-4785-85e9-f0b06e578d29" path="/var/lib/kubelet/pods/9cf66c1e-2f67-4785-85e9-f0b06e578d29/volumes" Feb 18 19:36:53 crc kubenswrapper[4942]: I0218 19:36:53.974718 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4dcf4849-8f10-4a95-9168-8933cc67b424","Type":"ContainerStarted","Data":"8e58bab231be9a36ff597ec486b5cf488a59d8a85c6730bc1ba3360922d52b13"} Feb 18 19:36:53 crc kubenswrapper[4942]: I0218 19:36:53.999807 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.999742101 podStartE2EDuration="2.999742101s" podCreationTimestamp="2026-02-18 19:36:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:53.989749858 +0000 UTC m=+1173.694682523" watchObservedRunningTime="2026-02-18 19:36:53.999742101 +0000 UTC 
m=+1173.704674806" Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.897572 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.943626 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-scripts\") pod \"236551c8-9c37-4188-aea0-7ea6cb91c093\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.943748 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-run-httpd\") pod \"236551c8-9c37-4188-aea0-7ea6cb91c093\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.943827 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-config-data\") pod \"236551c8-9c37-4188-aea0-7ea6cb91c093\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.943875 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-sg-core-conf-yaml\") pod \"236551c8-9c37-4188-aea0-7ea6cb91c093\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.943935 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-combined-ca-bundle\") pod \"236551c8-9c37-4188-aea0-7ea6cb91c093\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.944007 
4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgml2\" (UniqueName: \"kubernetes.io/projected/236551c8-9c37-4188-aea0-7ea6cb91c093-kube-api-access-rgml2\") pod \"236551c8-9c37-4188-aea0-7ea6cb91c093\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.944092 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-log-httpd\") pod \"236551c8-9c37-4188-aea0-7ea6cb91c093\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.944166 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "236551c8-9c37-4188-aea0-7ea6cb91c093" (UID: "236551c8-9c37-4188-aea0-7ea6cb91c093"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.944522 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "236551c8-9c37-4188-aea0-7ea6cb91c093" (UID: "236551c8-9c37-4188-aea0-7ea6cb91c093"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.944588 4942 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.949460 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/236551c8-9c37-4188-aea0-7ea6cb91c093-kube-api-access-rgml2" (OuterVolumeSpecName: "kube-api-access-rgml2") pod "236551c8-9c37-4188-aea0-7ea6cb91c093" (UID: "236551c8-9c37-4188-aea0-7ea6cb91c093"). InnerVolumeSpecName "kube-api-access-rgml2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.951957 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-scripts" (OuterVolumeSpecName: "scripts") pod "236551c8-9c37-4188-aea0-7ea6cb91c093" (UID: "236551c8-9c37-4188-aea0-7ea6cb91c093"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.978392 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "236551c8-9c37-4188-aea0-7ea6cb91c093" (UID: "236551c8-9c37-4188-aea0-7ea6cb91c093"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.022164 4942 generic.go:334] "Generic (PLEG): container finished" podID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerID="58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b" exitCode=0 Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.022211 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerDied","Data":"58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b"} Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.022240 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerDied","Data":"e8bdb79b574b6c2621bf8442e6633e45aa4f74b8b682ec57dcc5865cbb5bdecf"} Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.022262 4942 scope.go:117] "RemoveContainer" containerID="df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.022420 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.023906 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "236551c8-9c37-4188-aea0-7ea6cb91c093" (UID: "236551c8-9c37-4188-aea0-7ea6cb91c093"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.042676 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-config-data" (OuterVolumeSpecName: "config-data") pod "236551c8-9c37-4188-aea0-7ea6cb91c093" (UID: "236551c8-9c37-4188-aea0-7ea6cb91c093"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.045905 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.046023 4942 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.046086 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.046144 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgml2\" (UniqueName: \"kubernetes.io/projected/236551c8-9c37-4188-aea0-7ea6cb91c093-kube-api-access-rgml2\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.046282 4942 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.046350 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.065210 4942 scope.go:117] "RemoveContainer" containerID="3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.082691 4942 scope.go:117] "RemoveContainer" containerID="656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.102321 4942 scope.go:117] "RemoveContainer" containerID="58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.118580 4942 scope.go:117] "RemoveContainer" containerID="df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1" Feb 18 19:36:58 crc kubenswrapper[4942]: E0218 19:36:58.119008 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1\": container with ID starting with df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1 not found: ID does not exist" containerID="df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.119048 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1"} err="failed to get container status \"df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1\": rpc error: code = NotFound desc = could not find container \"df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1\": container with ID starting with df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1 not found: ID does not exist" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.119074 4942 scope.go:117] "RemoveContainer" 
containerID="3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd" Feb 18 19:36:58 crc kubenswrapper[4942]: E0218 19:36:58.119427 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd\": container with ID starting with 3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd not found: ID does not exist" containerID="3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.119466 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd"} err="failed to get container status \"3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd\": rpc error: code = NotFound desc = could not find container \"3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd\": container with ID starting with 3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd not found: ID does not exist" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.119492 4942 scope.go:117] "RemoveContainer" containerID="656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546" Feb 18 19:36:58 crc kubenswrapper[4942]: E0218 19:36:58.119997 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546\": container with ID starting with 656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546 not found: ID does not exist" containerID="656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.120021 4942 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546"} err="failed to get container status \"656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546\": rpc error: code = NotFound desc = could not find container \"656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546\": container with ID starting with 656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546 not found: ID does not exist" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.120037 4942 scope.go:117] "RemoveContainer" containerID="58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b" Feb 18 19:36:58 crc kubenswrapper[4942]: E0218 19:36:58.120377 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b\": container with ID starting with 58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b not found: ID does not exist" containerID="58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.120400 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b"} err="failed to get container status \"58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b\": rpc error: code = NotFound desc = could not find container \"58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b\": container with ID starting with 58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b not found: ID does not exist" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.370800 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.393349 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 
18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.405660 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:58 crc kubenswrapper[4942]: E0218 19:36:58.406492 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="sg-core" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.406617 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="sg-core" Feb 18 19:36:58 crc kubenswrapper[4942]: E0218 19:36:58.406742 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="proxy-httpd" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.406869 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="proxy-httpd" Feb 18 19:36:58 crc kubenswrapper[4942]: E0218 19:36:58.406985 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="ceilometer-central-agent" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.407072 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="ceilometer-central-agent" Feb 18 19:36:58 crc kubenswrapper[4942]: E0218 19:36:58.407272 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="ceilometer-notification-agent" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.407377 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="ceilometer-notification-agent" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.407719 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="proxy-httpd" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 
19:36:58.407870 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="ceilometer-notification-agent" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.408000 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="sg-core" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.408140 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="ceilometer-central-agent" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.410690 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.413409 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.420009 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.441274 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.459975 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-config-data\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.460031 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc 
kubenswrapper[4942]: I0218 19:36:58.460055 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-scripts\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.460097 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2dh2\" (UniqueName: \"kubernetes.io/projected/4981b67f-ebf1-4d2e-a717-67edbc242474-kube-api-access-v2dh2\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.460147 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-run-httpd\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.460173 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.460220 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-log-httpd\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.561274 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.561591 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-scripts\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.561658 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2dh2\" (UniqueName: \"kubernetes.io/projected/4981b67f-ebf1-4d2e-a717-67edbc242474-kube-api-access-v2dh2\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.561721 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-run-httpd\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.561748 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.561805 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-log-httpd\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 
crc kubenswrapper[4942]: I0218 19:36:58.561888 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-config-data\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.562297 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-run-httpd\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.562502 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-log-httpd\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.566915 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-scripts\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.567208 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-config-data\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.567916 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.568042 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.582058 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2dh2\" (UniqueName: \"kubernetes.io/projected/4981b67f-ebf1-4d2e-a717-67edbc242474-kube-api-access-v2dh2\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.732335 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:59 crc kubenswrapper[4942]: I0218 19:36:59.033297 4942 generic.go:334] "Generic (PLEG): container finished" podID="e14c764c-c1b5-4196-a48b-2aff4c38782b" containerID="ebb11ccd20be89bb58e99f7b4e01c65708315c8dea33a27fefa79d1ee13756e9" exitCode=0 Feb 18 19:36:59 crc kubenswrapper[4942]: I0218 19:36:59.033350 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bbrrn" event={"ID":"e14c764c-c1b5-4196-a48b-2aff4c38782b","Type":"ContainerDied","Data":"ebb11ccd20be89bb58e99f7b4e01c65708315c8dea33a27fefa79d1ee13756e9"} Feb 18 19:36:59 crc kubenswrapper[4942]: I0218 19:36:59.048228 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" path="/var/lib/kubelet/pods/236551c8-9c37-4188-aea0-7ea6cb91c093/volumes" Feb 18 19:36:59 crc kubenswrapper[4942]: W0218 19:36:59.163910 4942 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4981b67f_ebf1_4d2e_a717_67edbc242474.slice/crio-30ee69691a3055e0e7dae81b55c1720ecd6dcb44e21ef193aead637c15341932 WatchSource:0}: Error finding container 30ee69691a3055e0e7dae81b55c1720ecd6dcb44e21ef193aead637c15341932: Status 404 returned error can't find the container with id 30ee69691a3055e0e7dae81b55c1720ecd6dcb44e21ef193aead637c15341932 Feb 18 19:36:59 crc kubenswrapper[4942]: I0218 19:36:59.166542 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:59 crc kubenswrapper[4942]: I0218 19:36:59.167002 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.047692 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerStarted","Data":"4b1f480ddf927d40a046c53a831832dfc0661e5a1bfbb9d0061a5f0118ebd54e"} Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.047794 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerStarted","Data":"30ee69691a3055e0e7dae81b55c1720ecd6dcb44e21ef193aead637c15341932"} Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.364930 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.399070 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-config-data\") pod \"e14c764c-c1b5-4196-a48b-2aff4c38782b\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.399211 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzdtz\" (UniqueName: \"kubernetes.io/projected/e14c764c-c1b5-4196-a48b-2aff4c38782b-kube-api-access-hzdtz\") pod \"e14c764c-c1b5-4196-a48b-2aff4c38782b\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.399287 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-scripts\") pod \"e14c764c-c1b5-4196-a48b-2aff4c38782b\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.399381 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-combined-ca-bundle\") pod \"e14c764c-c1b5-4196-a48b-2aff4c38782b\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.423142 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e14c764c-c1b5-4196-a48b-2aff4c38782b-kube-api-access-hzdtz" (OuterVolumeSpecName: "kube-api-access-hzdtz") pod "e14c764c-c1b5-4196-a48b-2aff4c38782b" (UID: "e14c764c-c1b5-4196-a48b-2aff4c38782b"). InnerVolumeSpecName "kube-api-access-hzdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.423600 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-scripts" (OuterVolumeSpecName: "scripts") pod "e14c764c-c1b5-4196-a48b-2aff4c38782b" (UID: "e14c764c-c1b5-4196-a48b-2aff4c38782b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.427471 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-config-data" (OuterVolumeSpecName: "config-data") pod "e14c764c-c1b5-4196-a48b-2aff4c38782b" (UID: "e14c764c-c1b5-4196-a48b-2aff4c38782b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.444642 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e14c764c-c1b5-4196-a48b-2aff4c38782b" (UID: "e14c764c-c1b5-4196-a48b-2aff4c38782b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.501352 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.501383 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.501392 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzdtz\" (UniqueName: \"kubernetes.io/projected/e14c764c-c1b5-4196-a48b-2aff4c38782b-kube-api-access-hzdtz\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.501401 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.096277 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerStarted","Data":"a0cad6f7c64293c0ca724387bf6888f39862910aef71002691436d4792c6de55"} Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.103417 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bbrrn" event={"ID":"e14c764c-c1b5-4196-a48b-2aff4c38782b","Type":"ContainerDied","Data":"41848844e5ca0ec4e07ca2fcd7497cd5893a17d562a97a6ad440536587e4b055"} Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.103459 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41848844e5ca0ec4e07ca2fcd7497cd5893a17d562a97a6ad440536587e4b055" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 
19:37:01.103522 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.158389 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:37:01 crc kubenswrapper[4942]: E0218 19:37:01.158862 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14c764c-c1b5-4196-a48b-2aff4c38782b" containerName="nova-cell0-conductor-db-sync" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.158887 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14c764c-c1b5-4196-a48b-2aff4c38782b" containerName="nova-cell0-conductor-db-sync" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.159114 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14c764c-c1b5-4196-a48b-2aff4c38782b" containerName="nova-cell0-conductor-db-sync" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.159933 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.162476 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.162694 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r8ppn" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.181230 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.224878 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec2114e-697d-44fd-ae1e-4da66730e456-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.224923 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec2114e-697d-44fd-ae1e-4da66730e456-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.225008 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q92t\" (UniqueName: \"kubernetes.io/projected/7ec2114e-697d-44fd-ae1e-4da66730e456-kube-api-access-5q92t\") pod \"nova-cell0-conductor-0\" (UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.326989 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7ec2114e-697d-44fd-ae1e-4da66730e456-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.327052 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec2114e-697d-44fd-ae1e-4da66730e456-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.327126 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q92t\" (UniqueName: \"kubernetes.io/projected/7ec2114e-697d-44fd-ae1e-4da66730e456-kube-api-access-5q92t\") pod \"nova-cell0-conductor-0\" (UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.331496 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec2114e-697d-44fd-ae1e-4da66730e456-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.332655 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec2114e-697d-44fd-ae1e-4da66730e456-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.369418 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q92t\" (UniqueName: \"kubernetes.io/projected/7ec2114e-697d-44fd-ae1e-4da66730e456-kube-api-access-5q92t\") pod \"nova-cell0-conductor-0\" 
(UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.484979 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.942671 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:37:02 crc kubenswrapper[4942]: I0218 19:37:02.115286 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerStarted","Data":"611f841412a878234cbc129413f605782e9a919aae0510161f0f5229befb06e0"} Feb 18 19:37:02 crc kubenswrapper[4942]: I0218 19:37:02.117694 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7ec2114e-697d-44fd-ae1e-4da66730e456","Type":"ContainerStarted","Data":"c6ad23870e2b3fe26fcaa7290d1d42830d8faaa7fc9577845df69d239d211029"} Feb 18 19:37:02 crc kubenswrapper[4942]: I0218 19:37:02.374319 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:37:02 crc kubenswrapper[4942]: I0218 19:37:02.410222 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 19:37:03 crc kubenswrapper[4942]: I0218 19:37:03.136540 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7ec2114e-697d-44fd-ae1e-4da66730e456","Type":"ContainerStarted","Data":"04b4a1ff1cc34d018e0ef3fc7471f7b23d32ba2eb3cebed2cda340ab96ccbbfe"} Feb 18 19:37:03 crc kubenswrapper[4942]: I0218 19:37:03.137000 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 19:37:03 crc kubenswrapper[4942]: I0218 19:37:03.163901 4942 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.163873952 podStartE2EDuration="2.163873952s" podCreationTimestamp="2026-02-18 19:37:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:03.158143432 +0000 UTC m=+1182.863076127" watchObservedRunningTime="2026-02-18 19:37:03.163873952 +0000 UTC m=+1182.868806657" Feb 18 19:37:03 crc kubenswrapper[4942]: I0218 19:37:03.183564 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 19:37:04 crc kubenswrapper[4942]: I0218 19:37:04.148896 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerStarted","Data":"ca5ad4aca4a617b9bbb63455e63bf0207750221ea32250b434a89953bffe9fd9"} Feb 18 19:37:04 crc kubenswrapper[4942]: I0218 19:37:04.149140 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:04 crc kubenswrapper[4942]: I0218 19:37:04.188395 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.104659066 podStartE2EDuration="6.188379722s" podCreationTimestamp="2026-02-18 19:36:58 +0000 UTC" firstStartedPulling="2026-02-18 19:36:59.166675927 +0000 UTC m=+1178.871608602" lastFinishedPulling="2026-02-18 19:37:03.250396573 +0000 UTC m=+1182.955329258" observedRunningTime="2026-02-18 19:37:04.183822752 +0000 UTC m=+1183.888755417" watchObservedRunningTime="2026-02-18 19:37:04.188379722 +0000 UTC m=+1183.893312387" Feb 18 19:37:05 crc kubenswrapper[4942]: I0218 19:37:05.159312 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:37:11 crc kubenswrapper[4942]: I0218 19:37:11.529032 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:11 crc kubenswrapper[4942]: I0218 19:37:11.999380 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-6wkkj"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.000613 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.011981 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.012133 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.022882 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6wkkj"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.034748 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-config-data\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.034827 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6d97\" (UniqueName: \"kubernetes.io/projected/b4a19078-b432-452e-8918-7b0f8c60e632-kube-api-access-b6d97\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.034863 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-scripts\") pod 
\"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.034907 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.137123 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.137346 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-config-data\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.137403 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6d97\" (UniqueName: \"kubernetes.io/projected/b4a19078-b432-452e-8918-7b0f8c60e632-kube-api-access-b6d97\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.137437 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-scripts\") pod 
\"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.143684 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-scripts\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.152464 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.160369 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-config-data\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.165392 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6d97\" (UniqueName: \"kubernetes.io/projected/b4a19078-b432-452e-8918-7b0f8c60e632-kube-api-access-b6d97\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.200952 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.202461 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.208453 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.223927 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.240076 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-config-data\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.240128 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.240163 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1366c48-2eab-4f52-b946-41b5cd9682a9-logs\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.240223 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knn95\" (UniqueName: \"kubernetes.io/projected/f1366c48-2eab-4f52-b946-41b5cd9682a9-kube-api-access-knn95\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.282686 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 
19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.284329 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.289143 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.295408 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.296687 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.307162 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.307638 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.324426 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.333115 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.344305 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlgsg\" (UniqueName: \"kubernetes.io/projected/4e00cb35-640c-4e86-8ef4-9c11a4a83768-kube-api-access-mlgsg\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.344538 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.344882 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knn95\" (UniqueName: \"kubernetes.io/projected/f1366c48-2eab-4f52-b946-41b5cd9682a9-kube-api-access-knn95\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.346589 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-logs\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.347145 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-config-data\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.347220 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq77n\" (UniqueName: \"kubernetes.io/projected/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-kube-api-access-hq77n\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.347260 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.347294 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-config-data\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.347467 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.348931 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.349271 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1366c48-2eab-4f52-b946-41b5cd9682a9-logs\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.351467 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1366c48-2eab-4f52-b946-41b5cd9682a9-logs\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.355415 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.403796 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knn95\" (UniqueName: \"kubernetes.io/projected/f1366c48-2eab-4f52-b946-41b5cd9682a9-kube-api-access-knn95\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.404578 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-config-data\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.452581 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlgsg\" (UniqueName: 
\"kubernetes.io/projected/4e00cb35-640c-4e86-8ef4-9c11a4a83768-kube-api-access-mlgsg\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.452639 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.452868 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-logs\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.453252 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-config-data\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.453308 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq77n\" (UniqueName: \"kubernetes.io/projected/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-kube-api-access-hq77n\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.453343 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.453585 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.454399 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-logs\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.464595 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4gdxj"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.471136 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.475713 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.500648 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.482032 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.482055 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-config-data\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.501963 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlgsg\" (UniqueName: \"kubernetes.io/projected/4e00cb35-640c-4e86-8ef4-9c11a4a83768-kube-api-access-mlgsg\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.503373 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hq77n\" (UniqueName: \"kubernetes.io/projected/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-kube-api-access-hq77n\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.505525 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4gdxj"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.512730 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.514108 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.522098 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.523523 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.553418 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555380 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555420 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555456 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555470 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shhc6\" (UniqueName: \"kubernetes.io/projected/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-kube-api-access-shhc6\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555504 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-sb\") pod 
\"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555541 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-config\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555585 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-svc\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555604 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-config-data\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555627 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5blns\" (UniqueName: \"kubernetes.io/projected/1b369297-3ab8-4077-9af5-68455e6f2fa7-kube-api-access-5blns\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.610013 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.619574 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656147 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-config\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656227 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-svc\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656249 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-config-data\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656274 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5blns\" (UniqueName: \"kubernetes.io/projected/1b369297-3ab8-4077-9af5-68455e6f2fa7-kube-api-access-5blns\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656340 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656362 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656387 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656404 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shhc6\" (UniqueName: \"kubernetes.io/projected/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-kube-api-access-shhc6\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656438 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.657364 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.658031 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-config\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.661407 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-svc\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.663244 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.669517 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.669612 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.670518 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-config-data\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.688851 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shhc6\" (UniqueName: \"kubernetes.io/projected/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-kube-api-access-shhc6\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.689654 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5blns\" (UniqueName: \"kubernetes.io/projected/1b369297-3ab8-4077-9af5-68455e6f2fa7-kube-api-access-5blns\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.843049 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.860366 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.086669 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6wkkj"] Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.109163 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bqfl9"] Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.110733 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.113319 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.114282 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.138799 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bqfl9"] Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.188821 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.229032 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:37:13 crc kubenswrapper[4942]: W0218 19:37:13.242794 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dcc14d9_d4a9_41a7_a380_d28ed9d39ef3.slice/crio-c9b2d102cdaeda4714a41f8fd9d6eea88f81b5d3b64632545c0357f2607bbf2b WatchSource:0}: Error finding container c9b2d102cdaeda4714a41f8fd9d6eea88f81b5d3b64632545c0357f2607bbf2b: Status 404 returned error can't find the container with id c9b2d102cdaeda4714a41f8fd9d6eea88f81b5d3b64632545c0357f2607bbf2b Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.255922 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:37:13 crc kubenswrapper[4942]: W0218 19:37:13.256567 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e00cb35_640c_4e86_8ef4_9c11a4a83768.slice/crio-709413498c2a9aaa5df37a75330ab20cc5f02facee6f8f09c4d5399431b7f4ad WatchSource:0}: Error finding container 
709413498c2a9aaa5df37a75330ab20cc5f02facee6f8f09c4d5399431b7f4ad: Status 404 returned error can't find the container with id 709413498c2a9aaa5df37a75330ab20cc5f02facee6f8f09c4d5399431b7f4ad Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.258138 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3","Type":"ContainerStarted","Data":"c9b2d102cdaeda4714a41f8fd9d6eea88f81b5d3b64632545c0357f2607bbf2b"} Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.259461 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f1366c48-2eab-4f52-b946-41b5cd9682a9","Type":"ContainerStarted","Data":"7b2a3f65d372ea4e30eae9c1cb3d4c4737814a71cc9dd040162565d4ea30b91c"} Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.260934 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6wkkj" event={"ID":"b4a19078-b432-452e-8918-7b0f8c60e632","Type":"ContainerStarted","Data":"117e05763b70ed5511fc539676ab66f3e18bab8c00305576c6a5cf642aa0c86e"} Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.273044 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwvcd\" (UniqueName: \"kubernetes.io/projected/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-kube-api-access-cwvcd\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.273147 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc 
kubenswrapper[4942]: I0218 19:37:13.273250 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-scripts\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.273404 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-config-data\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.375097 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-config-data\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.375254 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwvcd\" (UniqueName: \"kubernetes.io/projected/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-kube-api-access-cwvcd\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.375294 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " 
pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.375373 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-scripts\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.383524 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-scripts\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.383776 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-config-data\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.383911 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.403967 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwvcd\" (UniqueName: \"kubernetes.io/projected/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-kube-api-access-cwvcd\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" 
Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.438987 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.537064 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4gdxj"] Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.549168 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.969622 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bqfl9"] Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.286952 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4e00cb35-640c-4e86-8ef4-9c11a4a83768","Type":"ContainerStarted","Data":"709413498c2a9aaa5df37a75330ab20cc5f02facee6f8f09c4d5399431b7f4ad"} Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.299444 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6wkkj" event={"ID":"b4a19078-b432-452e-8918-7b0f8c60e632","Type":"ContainerStarted","Data":"3d586c465df9e16d18d5348d207063c859dc4c0c45589222afa474013bd766c5"} Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.303436 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" event={"ID":"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2","Type":"ContainerStarted","Data":"2d29442d9649dbaa907e5735ea0dda7657607ca6fa24c4f83c7c2be4ce910a11"} Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.303494 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" event={"ID":"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2","Type":"ContainerStarted","Data":"bfe2fb8153deab5eb20af7fd17ac70900646805335b4c77b479b2ae83af42455"} Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 
19:37:14.323154 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b369297-3ab8-4077-9af5-68455e6f2fa7","Type":"ContainerStarted","Data":"76e19217db79153ca1c48808a6a49fd7fae4dd51157fee8775824302253c37bb"} Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.329504 4942 generic.go:334] "Generic (PLEG): container finished" podID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerID="5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646" exitCode=0 Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.329549 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" event={"ID":"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9","Type":"ContainerDied","Data":"5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646"} Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.329574 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" event={"ID":"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9","Type":"ContainerStarted","Data":"ac7c2c212726ec658ded163971fdbf65aa1ee8ef5f331c952d6143e9bfa521d8"} Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.341956 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-6wkkj" podStartSLOduration=3.341936384 podStartE2EDuration="3.341936384s" podCreationTimestamp="2026-02-18 19:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:14.338384481 +0000 UTC m=+1194.043317156" watchObservedRunningTime="2026-02-18 19:37:14.341936384 +0000 UTC m=+1194.046869049" Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.371633 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" podStartSLOduration=1.371611702 podStartE2EDuration="1.371611702s" podCreationTimestamp="2026-02-18 
19:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:14.358258402 +0000 UTC m=+1194.063191067" watchObservedRunningTime="2026-02-18 19:37:14.371611702 +0000 UTC m=+1194.076544367" Feb 18 19:37:15 crc kubenswrapper[4942]: I0218 19:37:15.343219 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" event={"ID":"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9","Type":"ContainerStarted","Data":"b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c"} Feb 18 19:37:15 crc kubenswrapper[4942]: I0218 19:37:15.370362 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" podStartSLOduration=3.3703448959999998 podStartE2EDuration="3.370344896s" podCreationTimestamp="2026-02-18 19:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:15.366650679 +0000 UTC m=+1195.071583344" watchObservedRunningTime="2026-02-18 19:37:15.370344896 +0000 UTC m=+1195.075277561" Feb 18 19:37:16 crc kubenswrapper[4942]: I0218 19:37:16.112904 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:37:16 crc kubenswrapper[4942]: I0218 19:37:16.126644 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:37:16 crc kubenswrapper[4942]: I0218 19:37:16.352187 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.361557 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b369297-3ab8-4077-9af5-68455e6f2fa7","Type":"ContainerStarted","Data":"03b6bb631528443b5f5f07cb8b13ef384d45de36f72e71bf857cfad0d68ac856"} Feb 18 19:37:17 crc 
kubenswrapper[4942]: I0218 19:37:17.363487 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f1366c48-2eab-4f52-b946-41b5cd9682a9","Type":"ContainerStarted","Data":"caf503a14e33f1f6c75a84e13fcab72b56d9b16362dbeda0c58791f6b27e6fcf"} Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.363511 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f1366c48-2eab-4f52-b946-41b5cd9682a9","Type":"ContainerStarted","Data":"0689b0c38955bef713bfebb9dc862d00cd9be367d7fc866a2fa0a00dec3cd055"} Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.365403 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4e00cb35-640c-4e86-8ef4-9c11a4a83768","Type":"ContainerStarted","Data":"4d23d58052be19c944bbfb1bdcae23f79449638dec97cb1fe1f8ae8d61b02fff"} Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.365469 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="4e00cb35-640c-4e86-8ef4-9c11a4a83768" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4d23d58052be19c944bbfb1bdcae23f79449638dec97cb1fe1f8ae8d61b02fff" gracePeriod=30 Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.367037 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3","Type":"ContainerStarted","Data":"5c56a687bcaef7e5e54c6de1b78374726c82904080884876b458c8525f4a0752"} Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.367068 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3","Type":"ContainerStarted","Data":"a988a34c898a05087381b3c398ec9025e84f7ccd37d7a000f5a4025b770b9c31"} Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.367154 4942 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-metadata-0" podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerName="nova-metadata-log" containerID="cri-o://a988a34c898a05087381b3c398ec9025e84f7ccd37d7a000f5a4025b770b9c31" gracePeriod=30 Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.367209 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerName="nova-metadata-metadata" containerID="cri-o://5c56a687bcaef7e5e54c6de1b78374726c82904080884876b458c8525f4a0752" gracePeriod=30 Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.384913 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.16112209 podStartE2EDuration="5.384899332s" podCreationTimestamp="2026-02-18 19:37:12 +0000 UTC" firstStartedPulling="2026-02-18 19:37:13.5715027 +0000 UTC m=+1193.276435365" lastFinishedPulling="2026-02-18 19:37:16.795279942 +0000 UTC m=+1196.500212607" observedRunningTime="2026-02-18 19:37:17.381347759 +0000 UTC m=+1197.086280424" watchObservedRunningTime="2026-02-18 19:37:17.384899332 +0000 UTC m=+1197.089831997" Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.411103 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.857560045 podStartE2EDuration="5.411083749s" podCreationTimestamp="2026-02-18 19:37:12 +0000 UTC" firstStartedPulling="2026-02-18 19:37:13.245483306 +0000 UTC m=+1192.950415971" lastFinishedPulling="2026-02-18 19:37:16.79900702 +0000 UTC m=+1196.503939675" observedRunningTime="2026-02-18 19:37:17.407811804 +0000 UTC m=+1197.112744469" watchObservedRunningTime="2026-02-18 19:37:17.411083749 +0000 UTC m=+1197.116016414" Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.435939 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.81890136 
podStartE2EDuration="5.435919981s" podCreationTimestamp="2026-02-18 19:37:12 +0000 UTC" firstStartedPulling="2026-02-18 19:37:13.179117994 +0000 UTC m=+1192.884050659" lastFinishedPulling="2026-02-18 19:37:16.796136615 +0000 UTC m=+1196.501069280" observedRunningTime="2026-02-18 19:37:17.426931065 +0000 UTC m=+1197.131863740" watchObservedRunningTime="2026-02-18 19:37:17.435919981 +0000 UTC m=+1197.140852666" Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.454552 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.919821458 podStartE2EDuration="5.454530439s" podCreationTimestamp="2026-02-18 19:37:12 +0000 UTC" firstStartedPulling="2026-02-18 19:37:13.258647711 +0000 UTC m=+1192.963580376" lastFinishedPulling="2026-02-18 19:37:16.793356662 +0000 UTC m=+1196.498289357" observedRunningTime="2026-02-18 19:37:17.443362936 +0000 UTC m=+1197.148295641" watchObservedRunningTime="2026-02-18 19:37:17.454530439 +0000 UTC m=+1197.159463104" Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.611035 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.611093 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.620194 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.861188 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 19:37:18 crc kubenswrapper[4942]: I0218 19:37:18.377852 4942 generic.go:334] "Generic (PLEG): container finished" podID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerID="a988a34c898a05087381b3c398ec9025e84f7ccd37d7a000f5a4025b770b9c31" exitCode=143 Feb 18 19:37:18 crc kubenswrapper[4942]: 
I0218 19:37:18.378640 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3","Type":"ContainerDied","Data":"a988a34c898a05087381b3c398ec9025e84f7ccd37d7a000f5a4025b770b9c31"} Feb 18 19:37:21 crc kubenswrapper[4942]: I0218 19:37:21.411301 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" event={"ID":"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2","Type":"ContainerDied","Data":"2d29442d9649dbaa907e5735ea0dda7657607ca6fa24c4f83c7c2be4ce910a11"} Feb 18 19:37:21 crc kubenswrapper[4942]: I0218 19:37:21.412361 4942 generic.go:334] "Generic (PLEG): container finished" podID="2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" containerID="2d29442d9649dbaa907e5735ea0dda7657607ca6fa24c4f83c7c2be4ce910a11" exitCode=0 Feb 18 19:37:21 crc kubenswrapper[4942]: I0218 19:37:21.414809 4942 generic.go:334] "Generic (PLEG): container finished" podID="b4a19078-b432-452e-8918-7b0f8c60e632" containerID="3d586c465df9e16d18d5348d207063c859dc4c0c45589222afa474013bd766c5" exitCode=0 Feb 18 19:37:21 crc kubenswrapper[4942]: I0218 19:37:21.414844 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6wkkj" event={"ID":"b4a19078-b432-452e-8918-7b0f8c60e632","Type":"ContainerDied","Data":"3d586c465df9e16d18d5348d207063c859dc4c0c45589222afa474013bd766c5"} Feb 18 19:37:22 crc kubenswrapper[4942]: I0218 19:37:22.553939 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:37:22 crc kubenswrapper[4942]: I0218 19:37:22.554304 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:37:22 crc kubenswrapper[4942]: I0218 19:37:22.844993 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:22 crc kubenswrapper[4942]: I0218 19:37:22.862299 4942 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 19:37:22 crc kubenswrapper[4942]: I0218 19:37:22.924327 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 19:37:22 crc kubenswrapper[4942]: I0218 19:37:22.943772 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lrqxl"] Feb 18 19:37:22 crc kubenswrapper[4942]: I0218 19:37:22.944034 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" podUID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" containerName="dnsmasq-dns" containerID="cri-o://2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b" gracePeriod=10 Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.042392 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.069422 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.127151 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-scripts\") pod \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.127249 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6d97\" (UniqueName: \"kubernetes.io/projected/b4a19078-b432-452e-8918-7b0f8c60e632-kube-api-access-b6d97\") pod \"b4a19078-b432-452e-8918-7b0f8c60e632\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.127293 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-config-data\") pod \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.127341 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-combined-ca-bundle\") pod \"b4a19078-b432-452e-8918-7b0f8c60e632\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.127383 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwvcd\" (UniqueName: \"kubernetes.io/projected/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-kube-api-access-cwvcd\") pod \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.127452 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-combined-ca-bundle\") pod \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.127501 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-config-data\") pod \"b4a19078-b432-452e-8918-7b0f8c60e632\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.127528 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-scripts\") pod \"b4a19078-b432-452e-8918-7b0f8c60e632\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.133668 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4a19078-b432-452e-8918-7b0f8c60e632-kube-api-access-b6d97" (OuterVolumeSpecName: "kube-api-access-b6d97") pod "b4a19078-b432-452e-8918-7b0f8c60e632" (UID: "b4a19078-b432-452e-8918-7b0f8c60e632"). InnerVolumeSpecName "kube-api-access-b6d97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.135739 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-scripts" (OuterVolumeSpecName: "scripts") pod "2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" (UID: "2e2cb901-5468-4fa9-9b3a-a16f238ff6e2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.137986 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-scripts" (OuterVolumeSpecName: "scripts") pod "b4a19078-b432-452e-8918-7b0f8c60e632" (UID: "b4a19078-b432-452e-8918-7b0f8c60e632"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.143427 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-kube-api-access-cwvcd" (OuterVolumeSpecName: "kube-api-access-cwvcd") pod "2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" (UID: "2e2cb901-5468-4fa9-9b3a-a16f238ff6e2"). InnerVolumeSpecName "kube-api-access-cwvcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.159698 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-config-data" (OuterVolumeSpecName: "config-data") pod "2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" (UID: "2e2cb901-5468-4fa9-9b3a-a16f238ff6e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.163831 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-config-data" (OuterVolumeSpecName: "config-data") pod "b4a19078-b432-452e-8918-7b0f8c60e632" (UID: "b4a19078-b432-452e-8918-7b0f8c60e632"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.179716 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" (UID: "2e2cb901-5468-4fa9-9b3a-a16f238ff6e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.200882 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4a19078-b432-452e-8918-7b0f8c60e632" (UID: "b4a19078-b432-452e-8918-7b0f8c60e632"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.242969 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.243010 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6d97\" (UniqueName: \"kubernetes.io/projected/b4a19078-b432-452e-8918-7b0f8c60e632-kube-api-access-b6d97\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.243026 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.243038 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.243052 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwvcd\" (UniqueName: \"kubernetes.io/projected/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-kube-api-access-cwvcd\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.243064 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.243075 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.243086 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.381750 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.438208 4942 generic.go:334] "Generic (PLEG): container finished" podID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" containerID="2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b" exitCode=0 Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.438290 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" event={"ID":"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0","Type":"ContainerDied","Data":"2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b"} Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.438325 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" event={"ID":"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0","Type":"ContainerDied","Data":"e8c28553c41794bb048bb9a7187c1a1ab7f1585b41b9526ea5b0ab594f5efa4f"} Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.438347 4942 scope.go:117] "RemoveContainer" containerID="2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.438505 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.445189 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" event={"ID":"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2","Type":"ContainerDied","Data":"bfe2fb8153deab5eb20af7fd17ac70900646805335b4c77b479b2ae83af42455"} Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.445239 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfe2fb8153deab5eb20af7fd17ac70900646805335b4c77b479b2ae83af42455" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.445246 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.448411 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-config\") pod \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.448494 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-swift-storage-0\") pod \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.448554 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tb7v\" (UniqueName: \"kubernetes.io/projected/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-kube-api-access-8tb7v\") pod \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.448651 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-nb\") pod \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.448696 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-svc\") pod \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.448757 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-sb\") pod \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.453665 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.453742 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6wkkj" event={"ID":"b4a19078-b432-452e-8918-7b0f8c60e632","Type":"ContainerDied","Data":"117e05763b70ed5511fc539676ab66f3e18bab8c00305576c6a5cf642aa0c86e"} Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.453830 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="117e05763b70ed5511fc539676ab66f3e18bab8c00305576c6a5cf642aa0c86e" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.458015 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-kube-api-access-8tb7v" (OuterVolumeSpecName: "kube-api-access-8tb7v") pod "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" (UID: "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0"). InnerVolumeSpecName "kube-api-access-8tb7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.471459 4942 scope.go:117] "RemoveContainer" containerID="dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.524365 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.544500 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" (UID: "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.569447 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-config" (OuterVolumeSpecName: "config") pod "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" (UID: "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.570653 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-config\") pod \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.570906 4942 scope.go:117] "RemoveContainer" containerID="2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b" Feb 18 19:37:23 crc kubenswrapper[4942]: W0218 19:37:23.571172 4942 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0/volumes/kubernetes.io~configmap/config Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.571193 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" (UID: "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.571199 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-config" (OuterVolumeSpecName: "config") pod "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" (UID: "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: E0218 19:37:23.571482 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b\": container with ID starting with 2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b not found: ID does not exist" containerID="2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.571519 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b"} err="failed to get container status \"2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b\": rpc error: code = NotFound desc = could not find container \"2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b\": container with ID starting with 2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b not found: ID does not exist" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.571544 4942 scope.go:117] "RemoveContainer" containerID="dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd" Feb 18 19:37:23 crc kubenswrapper[4942]: E0218 19:37:23.571948 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd\": container with ID starting with dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd not found: ID does not exist" containerID="dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.571979 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd"} 
err="failed to get container status \"dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd\": rpc error: code = NotFound desc = could not find container \"dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd\": container with ID starting with dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd not found: ID does not exist" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.572698 4942 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.572789 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tb7v\" (UniqueName: \"kubernetes.io/projected/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-kube-api-access-8tb7v\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.572851 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.572917 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.596562 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" (UID: "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.604189 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 19:37:23 crc kubenswrapper[4942]: E0218 19:37:23.606870 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" containerName="nova-cell1-conductor-db-sync" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.606894 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" containerName="nova-cell1-conductor-db-sync" Feb 18 19:37:23 crc kubenswrapper[4942]: E0218 19:37:23.606920 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" containerName="dnsmasq-dns" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.606928 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" containerName="dnsmasq-dns" Feb 18 19:37:23 crc kubenswrapper[4942]: E0218 19:37:23.606955 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a19078-b432-452e-8918-7b0f8c60e632" containerName="nova-manage" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.606965 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a19078-b432-452e-8918-7b0f8c60e632" containerName="nova-manage" Feb 18 19:37:23 crc kubenswrapper[4942]: E0218 19:37:23.606985 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" containerName="init" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.606991 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" containerName="init" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.607323 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a19078-b432-452e-8918-7b0f8c60e632" containerName="nova-manage" Feb 18 
19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.607366 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" containerName="nova-cell1-conductor-db-sync" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.607383 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" containerName="dnsmasq-dns" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.609248 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.611213 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.621201 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.631013 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" (UID: "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.653029 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.653047 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.675413 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a93f06c-139b-4052-9519-bbd4476a9dab-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.675832 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtd92\" (UniqueName: \"kubernetes.io/projected/5a93f06c-139b-4052-9519-bbd4476a9dab-kube-api-access-mtd92\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.676947 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a93f06c-139b-4052-9519-bbd4476a9dab-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc 
kubenswrapper[4942]: I0218 19:37:23.677228 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.677247 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.704539 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.704828 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-log" containerID="cri-o://0689b0c38955bef713bfebb9dc862d00cd9be367d7fc866a2fa0a00dec3cd055" gracePeriod=30 Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.704895 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-api" containerID="cri-o://caf503a14e33f1f6c75a84e13fcab72b56d9b16362dbeda0c58791f6b27e6fcf" gracePeriod=30 Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.740699 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.740781 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.771114 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lrqxl"] Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.778946 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a93f06c-139b-4052-9519-bbd4476a9dab-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.779042 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtd92\" (UniqueName: \"kubernetes.io/projected/5a93f06c-139b-4052-9519-bbd4476a9dab-kube-api-access-mtd92\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.779403 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a93f06c-139b-4052-9519-bbd4476a9dab-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.783394 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a93f06c-139b-4052-9519-bbd4476a9dab-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.788396 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5a93f06c-139b-4052-9519-bbd4476a9dab-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.793684 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtd92\" (UniqueName: \"kubernetes.io/projected/5a93f06c-139b-4052-9519-bbd4476a9dab-kube-api-access-mtd92\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.794244 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lrqxl"] Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.935173 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:24 crc kubenswrapper[4942]: I0218 19:37:24.014915 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:24 crc kubenswrapper[4942]: I0218 19:37:24.460662 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 19:37:24 crc kubenswrapper[4942]: I0218 19:37:24.477594 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5a93f06c-139b-4052-9519-bbd4476a9dab","Type":"ContainerStarted","Data":"c9cf565b46959ea33ee5857ca6e67aaf00420a397fed906256718948f3150fdc"} Feb 18 19:37:24 crc kubenswrapper[4942]: I0218 19:37:24.481039 4942 generic.go:334] "Generic (PLEG): container finished" podID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerID="0689b0c38955bef713bfebb9dc862d00cd9be367d7fc866a2fa0a00dec3cd055" exitCode=143 Feb 18 19:37:24 crc kubenswrapper[4942]: I0218 19:37:24.482350 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"f1366c48-2eab-4f52-b946-41b5cd9682a9","Type":"ContainerDied","Data":"0689b0c38955bef713bfebb9dc862d00cd9be367d7fc866a2fa0a00dec3cd055"} Feb 18 19:37:25 crc kubenswrapper[4942]: I0218 19:37:25.050109 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" path="/var/lib/kubelet/pods/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0/volumes" Feb 18 19:37:25 crc kubenswrapper[4942]: I0218 19:37:25.492074 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5a93f06c-139b-4052-9519-bbd4476a9dab","Type":"ContainerStarted","Data":"592a1177d47bd23fe639828c67034da3b063910dc835e128f26e637c35dda821"} Feb 18 19:37:25 crc kubenswrapper[4942]: I0218 19:37:25.492457 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:25 crc kubenswrapper[4942]: I0218 19:37:25.492328 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1b369297-3ab8-4077-9af5-68455e6f2fa7" containerName="nova-scheduler-scheduler" containerID="cri-o://03b6bb631528443b5f5f07cb8b13ef384d45de36f72e71bf857cfad0d68ac856" gracePeriod=30 Feb 18 19:37:25 crc kubenswrapper[4942]: I0218 19:37:25.513979 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.5139573459999998 podStartE2EDuration="2.513957346s" podCreationTimestamp="2026-02-18 19:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:25.507402244 +0000 UTC m=+1205.212334909" watchObservedRunningTime="2026-02-18 19:37:25.513957346 +0000 UTC m=+1205.218890011" Feb 18 19:37:27 crc kubenswrapper[4942]: E0218 19:37:27.865618 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="03b6bb631528443b5f5f07cb8b13ef384d45de36f72e71bf857cfad0d68ac856" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:37:27 crc kubenswrapper[4942]: E0218 19:37:27.869356 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="03b6bb631528443b5f5f07cb8b13ef384d45de36f72e71bf857cfad0d68ac856" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:37:27 crc kubenswrapper[4942]: E0218 19:37:27.872071 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="03b6bb631528443b5f5f07cb8b13ef384d45de36f72e71bf857cfad0d68ac856" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:37:27 crc kubenswrapper[4942]: E0218 19:37:27.872136 4942 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1b369297-3ab8-4077-9af5-68455e6f2fa7" containerName="nova-scheduler-scheduler" Feb 18 19:37:28 crc kubenswrapper[4942]: I0218 19:37:28.551166 4942 generic.go:334] "Generic (PLEG): container finished" podID="1b369297-3ab8-4077-9af5-68455e6f2fa7" containerID="03b6bb631528443b5f5f07cb8b13ef384d45de36f72e71bf857cfad0d68ac856" exitCode=0 Feb 18 19:37:28 crc kubenswrapper[4942]: I0218 19:37:28.551314 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b369297-3ab8-4077-9af5-68455e6f2fa7","Type":"ContainerDied","Data":"03b6bb631528443b5f5f07cb8b13ef384d45de36f72e71bf857cfad0d68ac856"} Feb 18 19:37:28 crc kubenswrapper[4942]: I0218 19:37:28.741305 4942 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.000718 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.097096 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-combined-ca-bundle\") pod \"1b369297-3ab8-4077-9af5-68455e6f2fa7\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.097328 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5blns\" (UniqueName: \"kubernetes.io/projected/1b369297-3ab8-4077-9af5-68455e6f2fa7-kube-api-access-5blns\") pod \"1b369297-3ab8-4077-9af5-68455e6f2fa7\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.097410 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-config-data\") pod \"1b369297-3ab8-4077-9af5-68455e6f2fa7\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.104874 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b369297-3ab8-4077-9af5-68455e6f2fa7-kube-api-access-5blns" (OuterVolumeSpecName: "kube-api-access-5blns") pod "1b369297-3ab8-4077-9af5-68455e6f2fa7" (UID: "1b369297-3ab8-4077-9af5-68455e6f2fa7"). InnerVolumeSpecName "kube-api-access-5blns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.126051 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b369297-3ab8-4077-9af5-68455e6f2fa7" (UID: "1b369297-3ab8-4077-9af5-68455e6f2fa7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.131138 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-config-data" (OuterVolumeSpecName: "config-data") pod "1b369297-3ab8-4077-9af5-68455e6f2fa7" (UID: "1b369297-3ab8-4077-9af5-68455e6f2fa7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.199951 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5blns\" (UniqueName: \"kubernetes.io/projected/1b369297-3ab8-4077-9af5-68455e6f2fa7-kube-api-access-5blns\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.200000 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.200014 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.565372 4942 generic.go:334] "Generic (PLEG): container finished" podID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerID="caf503a14e33f1f6c75a84e13fcab72b56d9b16362dbeda0c58791f6b27e6fcf" 
exitCode=0 Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.565571 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f1366c48-2eab-4f52-b946-41b5cd9682a9","Type":"ContainerDied","Data":"caf503a14e33f1f6c75a84e13fcab72b56d9b16362dbeda0c58791f6b27e6fcf"} Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.567144 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b369297-3ab8-4077-9af5-68455e6f2fa7","Type":"ContainerDied","Data":"76e19217db79153ca1c48808a6a49fd7fae4dd51157fee8775824302253c37bb"} Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.567199 4942 scope.go:117] "RemoveContainer" containerID="03b6bb631528443b5f5f07cb8b13ef384d45de36f72e71bf857cfad0d68ac856" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.567238 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.675644 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.702564 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.709161 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-combined-ca-bundle\") pod \"f1366c48-2eab-4f52-b946-41b5cd9682a9\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.709241 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-config-data\") pod \"f1366c48-2eab-4f52-b946-41b5cd9682a9\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.709283 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1366c48-2eab-4f52-b946-41b5cd9682a9-logs\") pod \"f1366c48-2eab-4f52-b946-41b5cd9682a9\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.709319 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knn95\" (UniqueName: \"kubernetes.io/projected/f1366c48-2eab-4f52-b946-41b5cd9682a9-kube-api-access-knn95\") pod \"f1366c48-2eab-4f52-b946-41b5cd9682a9\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.713739 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1366c48-2eab-4f52-b946-41b5cd9682a9-logs" (OuterVolumeSpecName: "logs") pod "f1366c48-2eab-4f52-b946-41b5cd9682a9" (UID: "f1366c48-2eab-4f52-b946-41b5cd9682a9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.726743 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.754413 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1366c48-2eab-4f52-b946-41b5cd9682a9-kube-api-access-knn95" (OuterVolumeSpecName: "kube-api-access-knn95") pod "f1366c48-2eab-4f52-b946-41b5cd9682a9" (UID: "f1366c48-2eab-4f52-b946-41b5cd9682a9"). InnerVolumeSpecName "kube-api-access-knn95". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.760941 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1366c48-2eab-4f52-b946-41b5cd9682a9" (UID: "f1366c48-2eab-4f52-b946-41b5cd9682a9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.788856 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:29 crc kubenswrapper[4942]: E0218 19:37:29.789622 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-log" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.789633 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-log" Feb 18 19:37:29 crc kubenswrapper[4942]: E0218 19:37:29.789651 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b369297-3ab8-4077-9af5-68455e6f2fa7" containerName="nova-scheduler-scheduler" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.789657 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b369297-3ab8-4077-9af5-68455e6f2fa7" containerName="nova-scheduler-scheduler" Feb 18 19:37:29 crc kubenswrapper[4942]: E0218 19:37:29.789687 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-api" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.789693 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-api" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.790000 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-log" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.790026 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-api" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.790050 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b369297-3ab8-4077-9af5-68455e6f2fa7" 
containerName="nova-scheduler-scheduler" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.790932 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.792881 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.820913 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.822473 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-config-data\") pod \"nova-scheduler-0\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.822543 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb4b2\" (UniqueName: \"kubernetes.io/projected/934ec68d-b7d2-4435-8e54-4984cea15920-kube-api-access-nb4b2\") pod \"nova-scheduler-0\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.822587 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.822691 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:29 crc 
kubenswrapper[4942]: I0218 19:37:29.822776 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1366c48-2eab-4f52-b946-41b5cd9682a9-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.822787 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knn95\" (UniqueName: \"kubernetes.io/projected/f1366c48-2eab-4f52-b946-41b5cd9682a9-kube-api-access-knn95\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.825787 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-config-data" (OuterVolumeSpecName: "config-data") pod "f1366c48-2eab-4f52-b946-41b5cd9682a9" (UID: "f1366c48-2eab-4f52-b946-41b5cd9682a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.924853 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-config-data\") pod \"nova-scheduler-0\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.924911 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb4b2\" (UniqueName: \"kubernetes.io/projected/934ec68d-b7d2-4435-8e54-4984cea15920-kube-api-access-nb4b2\") pod \"nova-scheduler-0\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.924944 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.925034 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.928458 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-config-data\") pod \"nova-scheduler-0\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.928737 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.942349 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb4b2\" (UniqueName: \"kubernetes.io/projected/934ec68d-b7d2-4435-8e54-4984cea15920-kube-api-access-nb4b2\") pod \"nova-scheduler-0\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.185876 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.585171 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f1366c48-2eab-4f52-b946-41b5cd9682a9","Type":"ContainerDied","Data":"7b2a3f65d372ea4e30eae9c1cb3d4c4737814a71cc9dd040162565d4ea30b91c"} Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.585299 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.585382 4942 scope.go:117] "RemoveContainer" containerID="caf503a14e33f1f6c75a84e13fcab72b56d9b16362dbeda0c58791f6b27e6fcf" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.646959 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.650883 4942 scope.go:117] "RemoveContainer" containerID="0689b0c38955bef713bfebb9dc862d00cd9be367d7fc866a2fa0a00dec3cd055" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.662807 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.674517 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.685864 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.688474 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.692201 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.696645 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.747124 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-config-data\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.747222 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.747258 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-logs\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.747320 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbc72\" (UniqueName: \"kubernetes.io/projected/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-kube-api-access-hbc72\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.848783 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-logs\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.848874 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbc72\" (UniqueName: \"kubernetes.io/projected/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-kube-api-access-hbc72\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.848933 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-config-data\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.849007 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.849356 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-logs\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.853386 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.853593 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-config-data\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.870020 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbc72\" (UniqueName: \"kubernetes.io/projected/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-kube-api-access-hbc72\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:31 crc kubenswrapper[4942]: I0218 19:37:31.059980 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b369297-3ab8-4077-9af5-68455e6f2fa7" path="/var/lib/kubelet/pods/1b369297-3ab8-4077-9af5-68455e6f2fa7/volumes" Feb 18 19:37:31 crc kubenswrapper[4942]: I0218 19:37:31.061220 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" path="/var/lib/kubelet/pods/f1366c48-2eab-4f52-b946-41b5cd9682a9/volumes" Feb 18 19:37:31 crc kubenswrapper[4942]: I0218 19:37:31.165146 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:37:31 crc kubenswrapper[4942]: I0218 19:37:31.595864 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"934ec68d-b7d2-4435-8e54-4984cea15920","Type":"ContainerStarted","Data":"f8f16eaf99b27e5378de6b9f610d1eff9cec3f93c1ffd82c5027dc6d962fe712"} Feb 18 19:37:31 crc kubenswrapper[4942]: I0218 19:37:31.595909 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"934ec68d-b7d2-4435-8e54-4984cea15920","Type":"ContainerStarted","Data":"c51803e068a4df40cf491f2ad59ffe56be6273114ad918ad454d9a2712bc7592"} Feb 18 19:37:31 crc kubenswrapper[4942]: I0218 19:37:31.620598 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.620578177 podStartE2EDuration="2.620578177s" podCreationTimestamp="2026-02-18 19:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:31.615715779 +0000 UTC m=+1211.320648454" watchObservedRunningTime="2026-02-18 19:37:31.620578177 +0000 UTC m=+1211.325510832" Feb 18 19:37:31 crc kubenswrapper[4942]: I0218 19:37:31.664720 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:32 crc kubenswrapper[4942]: I0218 19:37:32.607326 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a","Type":"ContainerStarted","Data":"76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b"} Feb 18 19:37:32 crc kubenswrapper[4942]: I0218 19:37:32.607631 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a","Type":"ContainerStarted","Data":"7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2"} Feb 18 19:37:32 crc kubenswrapper[4942]: 
I0218 19:37:32.607643 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a","Type":"ContainerStarted","Data":"cf0e7a844a7633e58acd2bee9698d5dc7b514eece929972cfe261a5a10983dd7"} Feb 18 19:37:32 crc kubenswrapper[4942]: I0218 19:37:32.631969 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.631946172 podStartE2EDuration="2.631946172s" podCreationTimestamp="2026-02-18 19:37:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:32.623943112 +0000 UTC m=+1212.328875777" watchObservedRunningTime="2026-02-18 19:37:32.631946172 +0000 UTC m=+1212.336878837" Feb 18 19:37:32 crc kubenswrapper[4942]: I0218 19:37:32.865957 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:37:32 crc kubenswrapper[4942]: I0218 19:37:32.866243 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a8f1712c-12df-4ca2-81d3-dc649c747868" containerName="kube-state-metrics" containerID="cri-o://91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6" gracePeriod=30 Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.364302 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.425608 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f75rx\" (UniqueName: \"kubernetes.io/projected/a8f1712c-12df-4ca2-81d3-dc649c747868-kube-api-access-f75rx\") pod \"a8f1712c-12df-4ca2-81d3-dc649c747868\" (UID: \"a8f1712c-12df-4ca2-81d3-dc649c747868\") " Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.436729 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f1712c-12df-4ca2-81d3-dc649c747868-kube-api-access-f75rx" (OuterVolumeSpecName: "kube-api-access-f75rx") pod "a8f1712c-12df-4ca2-81d3-dc649c747868" (UID: "a8f1712c-12df-4ca2-81d3-dc649c747868"). InnerVolumeSpecName "kube-api-access-f75rx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.528556 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f75rx\" (UniqueName: \"kubernetes.io/projected/a8f1712c-12df-4ca2-81d3-dc649c747868-kube-api-access-f75rx\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.616852 4942 generic.go:334] "Generic (PLEG): container finished" podID="a8f1712c-12df-4ca2-81d3-dc649c747868" containerID="91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6" exitCode=2 Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.616924 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.616911 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8f1712c-12df-4ca2-81d3-dc649c747868","Type":"ContainerDied","Data":"91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6"} Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.618009 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8f1712c-12df-4ca2-81d3-dc649c747868","Type":"ContainerDied","Data":"60cb4ff34d0b296ea32561c63d6c9eaa0072a589abe5d55659f37a97a3ea461d"} Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.618056 4942 scope.go:117] "RemoveContainer" containerID="91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.642945 4942 scope.go:117] "RemoveContainer" containerID="91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6" Feb 18 19:37:33 crc kubenswrapper[4942]: E0218 19:37:33.643350 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6\": container with ID starting with 91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6 not found: ID does not exist" containerID="91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.643408 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6"} err="failed to get container status \"91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6\": rpc error: code = NotFound desc = could not find container \"91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6\": container with ID starting with 
91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6 not found: ID does not exist" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.656902 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.668055 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.685319 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:37:33 crc kubenswrapper[4942]: E0218 19:37:33.685988 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f1712c-12df-4ca2-81d3-dc649c747868" containerName="kube-state-metrics" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.686019 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f1712c-12df-4ca2-81d3-dc649c747868" containerName="kube-state-metrics" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.686297 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f1712c-12df-4ca2-81d3-dc649c747868" containerName="kube-state-metrics" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.687274 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.690220 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.691326 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.695587 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.833728 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.834068 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.834199 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb2gm\" (UniqueName: \"kubernetes.io/projected/304de92f-d344-46e6-86b1-5f132f3698b1-kube-api-access-hb2gm\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.834429 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.936095 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.936658 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.936918 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb2gm\" (UniqueName: \"kubernetes.io/projected/304de92f-d344-46e6-86b1-5f132f3698b1-kube-api-access-hb2gm\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.937064 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.941229 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.942227 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.942614 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.957587 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb2gm\" (UniqueName: \"kubernetes.io/projected/304de92f-d344-46e6-86b1-5f132f3698b1-kube-api-access-hb2gm\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.965371 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:34 crc kubenswrapper[4942]: I0218 19:37:34.013595 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:37:34 crc kubenswrapper[4942]: I0218 19:37:34.490311 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:37:34 crc kubenswrapper[4942]: I0218 19:37:34.638535 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"304de92f-d344-46e6-86b1-5f132f3698b1","Type":"ContainerStarted","Data":"6d7b9afd9429a7f188836af6e87520172ae02b0de8e27e6e63d71774f11f89d7"} Feb 18 19:37:34 crc kubenswrapper[4942]: I0218 19:37:34.693631 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:34 crc kubenswrapper[4942]: I0218 19:37:34.694072 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="ceilometer-central-agent" containerID="cri-o://4b1f480ddf927d40a046c53a831832dfc0661e5a1bfbb9d0061a5f0118ebd54e" gracePeriod=30 Feb 18 19:37:34 crc kubenswrapper[4942]: I0218 19:37:34.694287 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="proxy-httpd" containerID="cri-o://ca5ad4aca4a617b9bbb63455e63bf0207750221ea32250b434a89953bffe9fd9" gracePeriod=30 Feb 18 19:37:34 crc kubenswrapper[4942]: I0218 19:37:34.694513 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="ceilometer-notification-agent" containerID="cri-o://a0cad6f7c64293c0ca724387bf6888f39862910aef71002691436d4792c6de55" gracePeriod=30 Feb 18 19:37:34 crc kubenswrapper[4942]: I0218 19:37:34.694572 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="sg-core" 
containerID="cri-o://611f841412a878234cbc129413f605782e9a919aae0510161f0f5229befb06e0" gracePeriod=30 Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.062344 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8f1712c-12df-4ca2-81d3-dc649c747868" path="/var/lib/kubelet/pods/a8f1712c-12df-4ca2-81d3-dc649c747868/volumes" Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.186855 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.652962 4942 generic.go:334] "Generic (PLEG): container finished" podID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerID="ca5ad4aca4a617b9bbb63455e63bf0207750221ea32250b434a89953bffe9fd9" exitCode=0 Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.653002 4942 generic.go:334] "Generic (PLEG): container finished" podID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerID="611f841412a878234cbc129413f605782e9a919aae0510161f0f5229befb06e0" exitCode=2 Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.653012 4942 generic.go:334] "Generic (PLEG): container finished" podID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerID="4b1f480ddf927d40a046c53a831832dfc0661e5a1bfbb9d0061a5f0118ebd54e" exitCode=0 Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.653039 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerDied","Data":"ca5ad4aca4a617b9bbb63455e63bf0207750221ea32250b434a89953bffe9fd9"} Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.653077 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerDied","Data":"611f841412a878234cbc129413f605782e9a919aae0510161f0f5229befb06e0"} Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.653105 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerDied","Data":"4b1f480ddf927d40a046c53a831832dfc0661e5a1bfbb9d0061a5f0118ebd54e"} Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.654872 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"304de92f-d344-46e6-86b1-5f132f3698b1","Type":"ContainerStarted","Data":"8c239e34c8dd9c41e4e4961eec98a816e5464b403c43822bf1283ca96cd98a62"} Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.655041 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.677239 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.224582656 podStartE2EDuration="2.677221712s" podCreationTimestamp="2026-02-18 19:37:33 +0000 UTC" firstStartedPulling="2026-02-18 19:37:34.491600144 +0000 UTC m=+1214.196532809" lastFinishedPulling="2026-02-18 19:37:34.9442392 +0000 UTC m=+1214.649171865" observedRunningTime="2026-02-18 19:37:35.670593398 +0000 UTC m=+1215.375526083" watchObservedRunningTime="2026-02-18 19:37:35.677221712 +0000 UTC m=+1215.382154377" Feb 18 19:37:36 crc kubenswrapper[4942]: I0218 19:37:36.666830 4942 generic.go:334] "Generic (PLEG): container finished" podID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerID="a0cad6f7c64293c0ca724387bf6888f39862910aef71002691436d4792c6de55" exitCode=0 Feb 18 19:37:36 crc kubenswrapper[4942]: I0218 19:37:36.666898 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerDied","Data":"a0cad6f7c64293c0ca724387bf6888f39862910aef71002691436d4792c6de55"} Feb 18 19:37:36 crc kubenswrapper[4942]: I0218 19:37:36.976123 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.100484 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2dh2\" (UniqueName: \"kubernetes.io/projected/4981b67f-ebf1-4d2e-a717-67edbc242474-kube-api-access-v2dh2\") pod \"4981b67f-ebf1-4d2e-a717-67edbc242474\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.100581 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-config-data\") pod \"4981b67f-ebf1-4d2e-a717-67edbc242474\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.100678 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-scripts\") pod \"4981b67f-ebf1-4d2e-a717-67edbc242474\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.100860 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-sg-core-conf-yaml\") pod \"4981b67f-ebf1-4d2e-a717-67edbc242474\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.100929 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-log-httpd\") pod \"4981b67f-ebf1-4d2e-a717-67edbc242474\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.101030 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-run-httpd\") pod \"4981b67f-ebf1-4d2e-a717-67edbc242474\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.101056 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-combined-ca-bundle\") pod \"4981b67f-ebf1-4d2e-a717-67edbc242474\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.101486 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4981b67f-ebf1-4d2e-a717-67edbc242474" (UID: "4981b67f-ebf1-4d2e-a717-67edbc242474"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.101891 4942 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.101876 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4981b67f-ebf1-4d2e-a717-67edbc242474" (UID: "4981b67f-ebf1-4d2e-a717-67edbc242474"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.106887 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4981b67f-ebf1-4d2e-a717-67edbc242474-kube-api-access-v2dh2" (OuterVolumeSpecName: "kube-api-access-v2dh2") pod "4981b67f-ebf1-4d2e-a717-67edbc242474" (UID: "4981b67f-ebf1-4d2e-a717-67edbc242474"). InnerVolumeSpecName "kube-api-access-v2dh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.115703 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-scripts" (OuterVolumeSpecName: "scripts") pod "4981b67f-ebf1-4d2e-a717-67edbc242474" (UID: "4981b67f-ebf1-4d2e-a717-67edbc242474"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.133734 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4981b67f-ebf1-4d2e-a717-67edbc242474" (UID: "4981b67f-ebf1-4d2e-a717-67edbc242474"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.195704 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4981b67f-ebf1-4d2e-a717-67edbc242474" (UID: "4981b67f-ebf1-4d2e-a717-67edbc242474"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.203374 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.203413 4942 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.203428 4942 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.203446 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.203462 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2dh2\" (UniqueName: \"kubernetes.io/projected/4981b67f-ebf1-4d2e-a717-67edbc242474-kube-api-access-v2dh2\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.212011 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-config-data" (OuterVolumeSpecName: "config-data") pod "4981b67f-ebf1-4d2e-a717-67edbc242474" (UID: "4981b67f-ebf1-4d2e-a717-67edbc242474"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.305301 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.676686 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerDied","Data":"30ee69691a3055e0e7dae81b55c1720ecd6dcb44e21ef193aead637c15341932"} Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.676731 4942 scope.go:117] "RemoveContainer" containerID="ca5ad4aca4a617b9bbb63455e63bf0207750221ea32250b434a89953bffe9fd9" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.676859 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.710322 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.717127 4942 scope.go:117] "RemoveContainer" containerID="611f841412a878234cbc129413f605782e9a919aae0510161f0f5229befb06e0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.717529 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.734745 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:37 crc kubenswrapper[4942]: E0218 19:37:37.735164 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="ceilometer-notification-agent" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.735184 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" 
containerName="ceilometer-notification-agent" Feb 18 19:37:37 crc kubenswrapper[4942]: E0218 19:37:37.735209 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="ceilometer-central-agent" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.735216 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="ceilometer-central-agent" Feb 18 19:37:37 crc kubenswrapper[4942]: E0218 19:37:37.735236 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="sg-core" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.735243 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="sg-core" Feb 18 19:37:37 crc kubenswrapper[4942]: E0218 19:37:37.735254 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="proxy-httpd" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.735259 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="proxy-httpd" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.735419 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="ceilometer-central-agent" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.735433 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="sg-core" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.735449 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="ceilometer-notification-agent" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.735462 4942 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="proxy-httpd" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.737032 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.741137 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.742568 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.742806 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.747134 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.751192 4942 scope.go:117] "RemoveContainer" containerID="a0cad6f7c64293c0ca724387bf6888f39862910aef71002691436d4792c6de55" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.814240 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-scripts\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.814533 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-config-data\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.814569 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-run-httpd\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.814608 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.814651 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.814687 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.814707 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2f69\" (UniqueName: \"kubernetes.io/projected/b3ce1800-8544-49d6-84a8-f635038f26da-kube-api-access-d2f69\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.814732 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-log-httpd\") pod \"ceilometer-0\" 
(UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.815732 4942 scope.go:117] "RemoveContainer" containerID="4b1f480ddf927d40a046c53a831832dfc0661e5a1bfbb9d0061a5f0118ebd54e" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.916334 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.916400 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.916440 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.916464 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2f69\" (UniqueName: \"kubernetes.io/projected/b3ce1800-8544-49d6-84a8-f635038f26da-kube-api-access-d2f69\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.916495 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-log-httpd\") pod \"ceilometer-0\" (UID: 
\"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.916830 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-scripts\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.917249 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-config-data\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.917286 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-run-httpd\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.917202 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-log-httpd\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.917543 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-run-httpd\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.921274 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.921969 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.922019 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-config-data\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.924520 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-scripts\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.925835 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.935529 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2f69\" (UniqueName: \"kubernetes.io/projected/b3ce1800-8544-49d6-84a8-f635038f26da-kube-api-access-d2f69\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0" Feb 18 19:37:38 crc kubenswrapper[4942]: I0218 19:37:38.111928 4942 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:37:38 crc kubenswrapper[4942]: I0218 19:37:38.599001 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:38 crc kubenswrapper[4942]: I0218 19:37:38.685567 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerStarted","Data":"4e628a6fe7bba13d144b29335e6e3f96fe39a4ea90cabd697733b132cffcd80d"} Feb 18 19:37:39 crc kubenswrapper[4942]: I0218 19:37:39.046267 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" path="/var/lib/kubelet/pods/4981b67f-ebf1-4d2e-a717-67edbc242474/volumes" Feb 18 19:37:39 crc kubenswrapper[4942]: I0218 19:37:39.709809 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerStarted","Data":"122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404"} Feb 18 19:37:40 crc kubenswrapper[4942]: I0218 19:37:40.187068 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 19:37:40 crc kubenswrapper[4942]: I0218 19:37:40.224367 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 19:37:40 crc kubenswrapper[4942]: I0218 19:37:40.722538 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerStarted","Data":"185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92"} Feb 18 19:37:40 crc kubenswrapper[4942]: I0218 19:37:40.722886 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerStarted","Data":"9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91"} Feb 18 19:37:40 crc kubenswrapper[4942]: I0218 19:37:40.764555 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 19:37:41 crc kubenswrapper[4942]: I0218 19:37:41.166184 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:37:41 crc kubenswrapper[4942]: I0218 19:37:41.166530 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:37:42 crc kubenswrapper[4942]: I0218 19:37:42.166083 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:37:42 crc kubenswrapper[4942]: I0218 19:37:42.207023 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:37:42 crc kubenswrapper[4942]: I0218 19:37:42.744835 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerStarted","Data":"9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95"} Feb 18 19:37:42 crc kubenswrapper[4942]: I0218 19:37:42.745074 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:37:42 crc kubenswrapper[4942]: I0218 19:37:42.775305 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=2.2465024590000002 podStartE2EDuration="5.775277545s" podCreationTimestamp="2026-02-18 19:37:37 +0000 UTC" firstStartedPulling="2026-02-18 19:37:38.614408115 +0000 UTC m=+1218.319340800" lastFinishedPulling="2026-02-18 19:37:42.143183221 +0000 UTC m=+1221.848115886" observedRunningTime="2026-02-18 19:37:42.760667342 +0000 UTC m=+1222.465600007" watchObservedRunningTime="2026-02-18 19:37:42.775277545 +0000 UTC m=+1222.480210240" Feb 18 19:37:44 crc kubenswrapper[4942]: I0218 19:37:44.024427 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 19:37:47 crc kubenswrapper[4942]: E0218 19:37:47.428297 4942 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4981b67f_ebf1_4d2e_a717_67edbc242474.slice/crio-30ee69691a3055e0e7dae81b55c1720ecd6dcb44e21ef193aead637c15341932: Error finding container 30ee69691a3055e0e7dae81b55c1720ecd6dcb44e21ef193aead637c15341932: Status 404 returned error can't find the container with id 30ee69691a3055e0e7dae81b55c1720ecd6dcb44e21ef193aead637c15341932 Feb 18 19:37:47 crc kubenswrapper[4942]: E0218 19:37:47.707610 4942 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e00cb35_640c_4e86_8ef4_9c11a4a83768.slice/crio-conmon-4d23d58052be19c944bbfb1bdcae23f79449638dec97cb1fe1f8ae8d61b02fff.scope\": RecentStats: unable to find data in memory cache]" Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.801099 4942 generic.go:334] "Generic (PLEG): container finished" podID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerID="5c56a687bcaef7e5e54c6de1b78374726c82904080884876b458c8525f4a0752" exitCode=137 Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.801140 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3","Type":"ContainerDied","Data":"5c56a687bcaef7e5e54c6de1b78374726c82904080884876b458c8525f4a0752"} Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.801203 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3","Type":"ContainerDied","Data":"c9b2d102cdaeda4714a41f8fd9d6eea88f81b5d3b64632545c0357f2607bbf2b"} Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.801217 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9b2d102cdaeda4714a41f8fd9d6eea88f81b5d3b64632545c0357f2607bbf2b" Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.802504 4942 generic.go:334] "Generic (PLEG): container finished" podID="4e00cb35-640c-4e86-8ef4-9c11a4a83768" containerID="4d23d58052be19c944bbfb1bdcae23f79449638dec97cb1fe1f8ae8d61b02fff" exitCode=137 Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.802542 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4e00cb35-640c-4e86-8ef4-9c11a4a83768","Type":"ContainerDied","Data":"4d23d58052be19c944bbfb1bdcae23f79449638dec97cb1fe1f8ae8d61b02fff"} Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.802562 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4e00cb35-640c-4e86-8ef4-9c11a4a83768","Type":"ContainerDied","Data":"709413498c2a9aaa5df37a75330ab20cc5f02facee6f8f09c4d5399431b7f4ad"} Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.802574 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="709413498c2a9aaa5df37a75330ab20cc5f02facee6f8f09c4d5399431b7f4ad" Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.840717 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.847104 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.929087 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlgsg\" (UniqueName: \"kubernetes.io/projected/4e00cb35-640c-4e86-8ef4-9c11a4a83768-kube-api-access-mlgsg\") pod \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.929137 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-config-data\") pod \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.929262 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-logs\") pod \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.929357 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq77n\" (UniqueName: \"kubernetes.io/projected/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-kube-api-access-hq77n\") pod \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.929379 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-combined-ca-bundle\") pod \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\" (UID: 
\"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.929472 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-combined-ca-bundle\") pod \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.929740 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-config-data\") pod \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.929584 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-logs" (OuterVolumeSpecName: "logs") pod "1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" (UID: "1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.930287 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.934248 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-kube-api-access-hq77n" (OuterVolumeSpecName: "kube-api-access-hq77n") pod "1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" (UID: "1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3"). InnerVolumeSpecName "kube-api-access-hq77n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.934965 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e00cb35-640c-4e86-8ef4-9c11a4a83768-kube-api-access-mlgsg" (OuterVolumeSpecName: "kube-api-access-mlgsg") pod "4e00cb35-640c-4e86-8ef4-9c11a4a83768" (UID: "4e00cb35-640c-4e86-8ef4-9c11a4a83768"). InnerVolumeSpecName "kube-api-access-mlgsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.962604 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" (UID: "1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.962937 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-config-data" (OuterVolumeSpecName: "config-data") pod "4e00cb35-640c-4e86-8ef4-9c11a4a83768" (UID: "4e00cb35-640c-4e86-8ef4-9c11a4a83768"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.964864 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e00cb35-640c-4e86-8ef4-9c11a4a83768" (UID: "4e00cb35-640c-4e86-8ef4-9c11a4a83768"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.984196 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-config-data" (OuterVolumeSpecName: "config-data") pod "1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" (UID: "1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.031799 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq77n\" (UniqueName: \"kubernetes.io/projected/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-kube-api-access-hq77n\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.031832 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.031841 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.031849 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.031858 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlgsg\" (UniqueName: \"kubernetes.io/projected/4e00cb35-640c-4e86-8ef4-9c11a4a83768-kube-api-access-mlgsg\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.031867 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.813813 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.813916 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.866865 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.876819 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.886685 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.922437 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.938517 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:37:48 crc kubenswrapper[4942]: E0218 19:37:48.939166 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerName="nova-metadata-metadata" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.939200 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerName="nova-metadata-metadata" Feb 18 19:37:48 crc kubenswrapper[4942]: E0218 19:37:48.939221 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerName="nova-metadata-log" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.939227 4942 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerName="nova-metadata-log" Feb 18 19:37:48 crc kubenswrapper[4942]: E0218 19:37:48.939270 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e00cb35-640c-4e86-8ef4-9c11a4a83768" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.939277 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e00cb35-640c-4e86-8ef4-9c11a4a83768" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.939446 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e00cb35-640c-4e86-8ef4-9c11a4a83768" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.939470 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerName="nova-metadata-log" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.939480 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerName="nova-metadata-metadata" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.940895 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.954573 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.954720 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.954838 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.955886 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.958003 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.962300 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.963418 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.973154 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.988802 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.045414 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" path="/var/lib/kubelet/pods/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3/volumes" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.045987 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4e00cb35-640c-4e86-8ef4-9c11a4a83768" path="/var/lib/kubelet/pods/4e00cb35-640c-4e86-8ef4-9c11a4a83768/volumes" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.055745 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89kwj\" (UniqueName: \"kubernetes.io/projected/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-kube-api-access-89kwj\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.055815 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.055841 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.055882 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.055903 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-nova-novncproxy-tls-certs\") 
pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.055974 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.056028 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-logs\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.056092 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-config-data\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.056183 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.056240 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjq7x\" (UniqueName: \"kubernetes.io/projected/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-kube-api-access-mjq7x\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.157786 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.157890 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-logs\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.158640 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-logs\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.158865 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-config-data\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.158894 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.159307 4942 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-mjq7x\" (UniqueName: \"kubernetes.io/projected/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-kube-api-access-mjq7x\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.159366 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89kwj\" (UniqueName: \"kubernetes.io/projected/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-kube-api-access-89kwj\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.159429 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.159459 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.159479 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.159499 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.165520 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.165803 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.165921 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.167222 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.170247 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-config-data\") pod \"nova-metadata-0\" (UID: 
\"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.172846 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.174373 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.179506 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89kwj\" (UniqueName: \"kubernetes.io/projected/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-kube-api-access-89kwj\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.193864 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjq7x\" (UniqueName: \"kubernetes.io/projected/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-kube-api-access-mjq7x\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.278079 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.294813 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.801401 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:37:49 crc kubenswrapper[4942]: W0218 19:37:49.806436 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc401dd00_c0a5_41c1_98ea_873e0e2ce7bf.slice/crio-7a3176d451d4f22629bbea51316e8da6dc1028f231b91417b79ebe376f8398f7 WatchSource:0}: Error finding container 7a3176d451d4f22629bbea51316e8da6dc1028f231b91417b79ebe376f8398f7: Status 404 returned error can't find the container with id 7a3176d451d4f22629bbea51316e8da6dc1028f231b91417b79ebe376f8398f7 Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.823865 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf","Type":"ContainerStarted","Data":"7a3176d451d4f22629bbea51316e8da6dc1028f231b91417b79ebe376f8398f7"} Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.897196 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:37:49 crc kubenswrapper[4942]: W0218 19:37:49.898044 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f2c79fe_40ed_4218_9db5_ecf2750cd43c.slice/crio-3ccb0af91234a49cce52e60bb8c5d83c89a7cbfd38f25c2175232126f6780778 WatchSource:0}: Error finding container 3ccb0af91234a49cce52e60bb8c5d83c89a7cbfd38f25c2175232126f6780778: Status 404 returned error can't find the container with id 3ccb0af91234a49cce52e60bb8c5d83c89a7cbfd38f25c2175232126f6780778 Feb 18 19:37:50 crc kubenswrapper[4942]: I0218 19:37:50.836128 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf","Type":"ContainerStarted","Data":"26d23b465934beeb44398ef9a7091b49a63a4c39f0979d582e519cc2943d3297"} Feb 18 19:37:50 crc kubenswrapper[4942]: I0218 19:37:50.838305 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f2c79fe-40ed-4218-9db5-ecf2750cd43c","Type":"ContainerStarted","Data":"2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6"} Feb 18 19:37:50 crc kubenswrapper[4942]: I0218 19:37:50.838357 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f2c79fe-40ed-4218-9db5-ecf2750cd43c","Type":"ContainerStarted","Data":"754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf"} Feb 18 19:37:50 crc kubenswrapper[4942]: I0218 19:37:50.838372 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f2c79fe-40ed-4218-9db5-ecf2750cd43c","Type":"ContainerStarted","Data":"3ccb0af91234a49cce52e60bb8c5d83c89a7cbfd38f25c2175232126f6780778"} Feb 18 19:37:50 crc kubenswrapper[4942]: I0218 19:37:50.870093 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.870071579 podStartE2EDuration="2.870071579s" podCreationTimestamp="2026-02-18 19:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:50.858031763 +0000 UTC m=+1230.562964428" watchObservedRunningTime="2026-02-18 19:37:50.870071579 +0000 UTC m=+1230.575004244" Feb 18 19:37:50 crc kubenswrapper[4942]: I0218 19:37:50.882980 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.882954967 podStartE2EDuration="2.882954967s" podCreationTimestamp="2026-02-18 19:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:50.880544064 +0000 UTC m=+1230.585476719" watchObservedRunningTime="2026-02-18 19:37:50.882954967 +0000 UTC m=+1230.587887642" Feb 18 19:37:51 crc kubenswrapper[4942]: I0218 19:37:51.170306 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:37:51 crc kubenswrapper[4942]: I0218 19:37:51.170912 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:37:51 crc kubenswrapper[4942]: I0218 19:37:51.171736 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:37:51 crc kubenswrapper[4942]: I0218 19:37:51.176789 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:37:51 crc kubenswrapper[4942]: I0218 19:37:51.848811 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:37:51 crc kubenswrapper[4942]: I0218 19:37:51.853452 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.037492 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7jhpx"] Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.039024 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.057744 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7jhpx"] Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.147399 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.147454 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.147473 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.147818 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-config\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.147908 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nkld8\" (UniqueName: \"kubernetes.io/projected/7097c36f-c705-4a21-be80-ea057d24ace8-kube-api-access-nkld8\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.148001 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.249720 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-config\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.249825 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkld8\" (UniqueName: \"kubernetes.io/projected/7097c36f-c705-4a21-be80-ea057d24ace8-kube-api-access-nkld8\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.249867 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.250019 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.250054 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.250102 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.251187 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.252574 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.252613 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.252723 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.252808 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-config\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.274538 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkld8\" (UniqueName: \"kubernetes.io/projected/7097c36f-c705-4a21-be80-ea057d24ace8-kube-api-access-nkld8\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.361126 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:52.876648 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7jhpx"] Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.717279 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.717838 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="ceilometer-central-agent" containerID="cri-o://122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404" gracePeriod=30 Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.717920 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="proxy-httpd" containerID="cri-o://9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95" gracePeriod=30 Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.717956 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="sg-core" containerID="cri-o://185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92" gracePeriod=30 Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.717969 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="ceilometer-notification-agent" containerID="cri-o://9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91" gracePeriod=30 Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.736433 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" 
containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.741188 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.741251 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.866984 4942 generic.go:334] "Generic (PLEG): container finished" podID="7097c36f-c705-4a21-be80-ea057d24ace8" containerID="81cc4bd58d4674e6299bf3f92627b59ac247ba15bf8a7017013a911bae4a12c5" exitCode=0 Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.867104 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" event={"ID":"7097c36f-c705-4a21-be80-ea057d24ace8","Type":"ContainerDied","Data":"81cc4bd58d4674e6299bf3f92627b59ac247ba15bf8a7017013a911bae4a12c5"} Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.867136 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" event={"ID":"7097c36f-c705-4a21-be80-ea057d24ace8","Type":"ContainerStarted","Data":"5c67289996fab91f1e19ef4b863aed3cd05ec958251ed161ac176da9f1432384"} Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.884207 4942 generic.go:334] "Generic (PLEG): container finished" podID="b3ce1800-8544-49d6-84a8-f635038f26da" containerID="185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92" exitCode=2 Feb 18 19:37:53 
crc kubenswrapper[4942]: I0218 19:37:53.884276 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerDied","Data":"185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92"} Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.279234 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.294955 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.295009 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.618151 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.895648 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" event={"ID":"7097c36f-c705-4a21-be80-ea057d24ace8","Type":"ContainerStarted","Data":"59bdba50db92d7f040d8a79e5e6b99a3471a426e80e12a58995334733d255e36"} Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.895815 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.898801 4942 generic.go:334] "Generic (PLEG): container finished" podID="b3ce1800-8544-49d6-84a8-f635038f26da" containerID="9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95" exitCode=0 Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.898848 4942 generic.go:334] "Generic (PLEG): container finished" podID="b3ce1800-8544-49d6-84a8-f635038f26da" containerID="122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404" exitCode=0 Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.898872 4942 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerDied","Data":"9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95"} Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.898915 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerDied","Data":"122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404"} Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.899036 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-log" containerID="cri-o://7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2" gracePeriod=30 Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.899083 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-api" containerID="cri-o://76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b" gracePeriod=30 Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.923677 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" podStartSLOduration=2.923658294 podStartE2EDuration="2.923658294s" podCreationTimestamp="2026-02-18 19:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:54.914281348 +0000 UTC m=+1234.619214013" watchObservedRunningTime="2026-02-18 19:37:54.923658294 +0000 UTC m=+1234.628590959" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.791836 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.833262 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-run-httpd\") pod \"b3ce1800-8544-49d6-84a8-f635038f26da\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.833340 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-combined-ca-bundle\") pod \"b3ce1800-8544-49d6-84a8-f635038f26da\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.833417 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2f69\" (UniqueName: \"kubernetes.io/projected/b3ce1800-8544-49d6-84a8-f635038f26da-kube-api-access-d2f69\") pod \"b3ce1800-8544-49d6-84a8-f635038f26da\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.833443 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-log-httpd\") pod \"b3ce1800-8544-49d6-84a8-f635038f26da\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.833492 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-config-data\") pod \"b3ce1800-8544-49d6-84a8-f635038f26da\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.833522 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-sg-core-conf-yaml\") pod \"b3ce1800-8544-49d6-84a8-f635038f26da\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.833588 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-scripts\") pod \"b3ce1800-8544-49d6-84a8-f635038f26da\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.833630 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-ceilometer-tls-certs\") pod \"b3ce1800-8544-49d6-84a8-f635038f26da\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.834614 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b3ce1800-8544-49d6-84a8-f635038f26da" (UID: "b3ce1800-8544-49d6-84a8-f635038f26da"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.835032 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b3ce1800-8544-49d6-84a8-f635038f26da" (UID: "b3ce1800-8544-49d6-84a8-f635038f26da"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.842593 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ce1800-8544-49d6-84a8-f635038f26da-kube-api-access-d2f69" (OuterVolumeSpecName: "kube-api-access-d2f69") pod "b3ce1800-8544-49d6-84a8-f635038f26da" (UID: "b3ce1800-8544-49d6-84a8-f635038f26da"). InnerVolumeSpecName "kube-api-access-d2f69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.845940 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-scripts" (OuterVolumeSpecName: "scripts") pod "b3ce1800-8544-49d6-84a8-f635038f26da" (UID: "b3ce1800-8544-49d6-84a8-f635038f26da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.876151 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b3ce1800-8544-49d6-84a8-f635038f26da" (UID: "b3ce1800-8544-49d6-84a8-f635038f26da"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.910613 4942 generic.go:334] "Generic (PLEG): container finished" podID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerID="7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2" exitCode=143 Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.910686 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a","Type":"ContainerDied","Data":"7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2"} Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.913941 4942 generic.go:334] "Generic (PLEG): container finished" podID="b3ce1800-8544-49d6-84a8-f635038f26da" containerID="9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91" exitCode=0 Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.914962 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.915509 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerDied","Data":"9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91"} Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.915590 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerDied","Data":"4e628a6fe7bba13d144b29335e6e3f96fe39a4ea90cabd697733b132cffcd80d"} Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.915651 4942 scope.go:117] "RemoveContainer" containerID="9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.939187 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.939388 4942 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.939454 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2f69\" (UniqueName: \"kubernetes.io/projected/b3ce1800-8544-49d6-84a8-f635038f26da-kube-api-access-d2f69\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.939515 4942 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.939577 4942 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.946408 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b3ce1800-8544-49d6-84a8-f635038f26da" (UID: "b3ce1800-8544-49d6-84a8-f635038f26da"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.948617 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3ce1800-8544-49d6-84a8-f635038f26da" (UID: "b3ce1800-8544-49d6-84a8-f635038f26da"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.964899 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-config-data" (OuterVolumeSpecName: "config-data") pod "b3ce1800-8544-49d6-84a8-f635038f26da" (UID: "b3ce1800-8544-49d6-84a8-f635038f26da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.992890 4942 scope.go:117] "RemoveContainer" containerID="185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.012714 4942 scope.go:117] "RemoveContainer" containerID="9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.032043 4942 scope.go:117] "RemoveContainer" containerID="122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.042984 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.043010 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.043019 4942 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.049597 4942 scope.go:117] "RemoveContainer" 
containerID="9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95" Feb 18 19:37:56 crc kubenswrapper[4942]: E0218 19:37:56.051927 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95\": container with ID starting with 9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95 not found: ID does not exist" containerID="9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.051973 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95"} err="failed to get container status \"9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95\": rpc error: code = NotFound desc = could not find container \"9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95\": container with ID starting with 9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95 not found: ID does not exist" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.052019 4942 scope.go:117] "RemoveContainer" containerID="185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92" Feb 18 19:37:56 crc kubenswrapper[4942]: E0218 19:37:56.058086 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92\": container with ID starting with 185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92 not found: ID does not exist" containerID="185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.058127 4942 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92"} err="failed to get container status \"185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92\": rpc error: code = NotFound desc = could not find container \"185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92\": container with ID starting with 185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92 not found: ID does not exist" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.058152 4942 scope.go:117] "RemoveContainer" containerID="9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91" Feb 18 19:37:56 crc kubenswrapper[4942]: E0218 19:37:56.058476 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91\": container with ID starting with 9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91 not found: ID does not exist" containerID="9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.058501 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91"} err="failed to get container status \"9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91\": rpc error: code = NotFound desc = could not find container \"9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91\": container with ID starting with 9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91 not found: ID does not exist" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.058517 4942 scope.go:117] "RemoveContainer" containerID="122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404" Feb 18 19:37:56 crc kubenswrapper[4942]: E0218 19:37:56.058811 4942 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404\": container with ID starting with 122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404 not found: ID does not exist" containerID="122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.058847 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404"} err="failed to get container status \"122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404\": rpc error: code = NotFound desc = could not find container \"122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404\": container with ID starting with 122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404 not found: ID does not exist" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.252472 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.261144 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.273895 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:56 crc kubenswrapper[4942]: E0218 19:37:56.274429 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="sg-core" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.274492 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="sg-core" Feb 18 19:37:56 crc kubenswrapper[4942]: E0218 19:37:56.274596 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="proxy-httpd" Feb 18 19:37:56 crc 
kubenswrapper[4942]: I0218 19:37:56.274643 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="proxy-httpd" Feb 18 19:37:56 crc kubenswrapper[4942]: E0218 19:37:56.274697 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="ceilometer-central-agent" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.274741 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="ceilometer-central-agent" Feb 18 19:37:56 crc kubenswrapper[4942]: E0218 19:37:56.274811 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="ceilometer-notification-agent" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.275032 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="ceilometer-notification-agent" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.275365 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="ceilometer-central-agent" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.275431 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="ceilometer-notification-agent" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.275492 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="proxy-httpd" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.275542 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="sg-core" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.277397 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.286564 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.286962 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.287820 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.300469 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.348503 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c330a0f3-afd7-4b55-8d33-8617b38bba91-log-httpd\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.348617 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-config-data\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.348673 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6wdq\" (UniqueName: \"kubernetes.io/projected/c330a0f3-afd7-4b55-8d33-8617b38bba91-kube-api-access-h6wdq\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.348693 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.348711 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c330a0f3-afd7-4b55-8d33-8617b38bba91-run-httpd\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.348779 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.348798 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.348815 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-scripts\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.450782 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c330a0f3-afd7-4b55-8d33-8617b38bba91-run-httpd\") pod \"ceilometer-0\" (UID: 
\"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.451288 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.451358 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.451445 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-scripts\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.451562 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c330a0f3-afd7-4b55-8d33-8617b38bba91-log-httpd\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.451687 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-config-data\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.452668 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-h6wdq\" (UniqueName: \"kubernetes.io/projected/c330a0f3-afd7-4b55-8d33-8617b38bba91-kube-api-access-h6wdq\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.453047 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.452213 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c330a0f3-afd7-4b55-8d33-8617b38bba91-run-httpd\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.452303 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c330a0f3-afd7-4b55-8d33-8617b38bba91-log-httpd\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.455576 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.455922 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-config-data\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc 
kubenswrapper[4942]: I0218 19:37:56.456288 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.456660 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.458445 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-scripts\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.471404 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6wdq\" (UniqueName: \"kubernetes.io/projected/c330a0f3-afd7-4b55-8d33-8617b38bba91-kube-api-access-h6wdq\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.606431 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:37:57 crc kubenswrapper[4942]: I0218 19:37:57.051127 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" path="/var/lib/kubelet/pods/b3ce1800-8544-49d6-84a8-f635038f26da/volumes" Feb 18 19:37:57 crc kubenswrapper[4942]: I0218 19:37:57.108425 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:57 crc kubenswrapper[4942]: I0218 19:37:57.932978 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerStarted","Data":"724cd265bca66d36c5206546352c1744fd4175372a93790f844a697f57c62cf3"} Feb 18 19:37:57 crc kubenswrapper[4942]: I0218 19:37:57.933326 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerStarted","Data":"4f3eeeb1d2a0ef0c1322e2cefb10443472f8be4c64f7fa8d9722e28c555476bb"} Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.460964 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.626382 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-logs\") pod \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.626659 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-combined-ca-bundle\") pod \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.626680 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-config-data\") pod \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.626814 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbc72\" (UniqueName: \"kubernetes.io/projected/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-kube-api-access-hbc72\") pod \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.627041 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-logs" (OuterVolumeSpecName: "logs") pod "e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" (UID: "e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.627542 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.644477 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-kube-api-access-hbc72" (OuterVolumeSpecName: "kube-api-access-hbc72") pod "e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" (UID: "e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a"). InnerVolumeSpecName "kube-api-access-hbc72". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.661562 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-config-data" (OuterVolumeSpecName: "config-data") pod "e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" (UID: "e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.661899 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" (UID: "e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.730073 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbc72\" (UniqueName: \"kubernetes.io/projected/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-kube-api-access-hbc72\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.730121 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.730138 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.944430 4942 generic.go:334] "Generic (PLEG): container finished" podID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerID="76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b" exitCode=0
Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.944492 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.944487 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a","Type":"ContainerDied","Data":"76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b"}
Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.944666 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a","Type":"ContainerDied","Data":"cf0e7a844a7633e58acd2bee9698d5dc7b514eece929972cfe261a5a10983dd7"}
Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.944703 4942 scope.go:117] "RemoveContainer" containerID="76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b"
Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.956830 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerStarted","Data":"532c795a258873ae20237a974d4194a954b9ccd2130576ed8beb675e6befbd60"}
Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.976284 4942 scope.go:117] "RemoveContainer" containerID="7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.006449 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.028283 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.062049 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" path="/var/lib/kubelet/pods/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a/volumes"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.063133 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:37:59 crc kubenswrapper[4942]: E0218 19:37:59.063713 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-api"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.063790 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-api"
Feb 18 19:37:59 crc kubenswrapper[4942]: E0218 19:37:59.063858 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-log"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.063902 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-log"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.064112 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-api"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.064198 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-log"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.065396 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.065543 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.067882 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.067995 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.068811 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.094970 4942 scope.go:117] "RemoveContainer" containerID="76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b"
Feb 18 19:37:59 crc kubenswrapper[4942]: E0218 19:37:59.095457 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b\": container with ID starting with 76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b not found: ID does not exist" containerID="76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.095486 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b"} err="failed to get container status \"76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b\": rpc error: code = NotFound desc = could not find container \"76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b\": container with ID starting with 76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b not found: ID does not exist"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.095507 4942 scope.go:117] "RemoveContainer" containerID="7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2"
Feb 18 19:37:59 crc kubenswrapper[4942]: E0218 19:37:59.095823 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2\": container with ID starting with 7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2 not found: ID does not exist" containerID="7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.095875 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2"} err="failed to get container status \"7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2\": rpc error: code = NotFound desc = could not find container \"7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2\": container with ID starting with 7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2 not found: ID does not exist"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.245005 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-config-data\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.245180 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.245273 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfa84b55-e3b4-425c-983b-57e60b06ee59-logs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.245405 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-public-tls-certs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.245475 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdxwb\" (UniqueName: \"kubernetes.io/projected/dfa84b55-e3b4-425c-983b-57e60b06ee59-kube-api-access-gdxwb\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.245544 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.280142 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.295084 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.295422 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.296621 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.347555 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-public-tls-certs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.347604 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdxwb\" (UniqueName: \"kubernetes.io/projected/dfa84b55-e3b4-425c-983b-57e60b06ee59-kube-api-access-gdxwb\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.347630 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.347714 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-config-data\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.347822 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.347851 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfa84b55-e3b4-425c-983b-57e60b06ee59-logs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.348244 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfa84b55-e3b4-425c-983b-57e60b06ee59-logs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.352492 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.352634 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-config-data\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.353003 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.353546 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-public-tls-certs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.366517 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdxwb\" (UniqueName: \"kubernetes.io/projected/dfa84b55-e3b4-425c-983b-57e60b06ee59-kube-api-access-gdxwb\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.409584 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.882427 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:37:59 crc kubenswrapper[4942]: W0218 19:37:59.885355 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfa84b55_e3b4_425c_983b_57e60b06ee59.slice/crio-7455a83c698c3d6e0c19dcaa1a1f353e541c5f547cd7e8fdbe3ffdd928daf970 WatchSource:0}: Error finding container 7455a83c698c3d6e0c19dcaa1a1f353e541c5f547cd7e8fdbe3ffdd928daf970: Status 404 returned error can't find the container with id 7455a83c698c3d6e0c19dcaa1a1f353e541c5f547cd7e8fdbe3ffdd928daf970
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.967457 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dfa84b55-e3b4-425c-983b-57e60b06ee59","Type":"ContainerStarted","Data":"7455a83c698c3d6e0c19dcaa1a1f353e541c5f547cd7e8fdbe3ffdd928daf970"}
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.969826 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerStarted","Data":"f410fe69fa8e94a16f161a61d09576b4203d2de3fee69dfb84d2e69966092817"}
Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.984100 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.163883 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-6sjb6"]
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.165225 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6sjb6"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.168049 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.168215 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.187869 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6sjb6"]
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.266647 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-scripts\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.266697 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-config-data\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.266790 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.266838 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm47r\" (UniqueName: \"kubernetes.io/projected/2c972a02-9d35-43d1-9ef6-ab99f7cded50-kube-api-access-sm47r\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.306970 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.306989 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.368353 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.368457 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm47r\" (UniqueName: \"kubernetes.io/projected/2c972a02-9d35-43d1-9ef6-ab99f7cded50-kube-api-access-sm47r\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.368635 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-scripts\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.369172 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-config-data\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.373344 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-scripts\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.373415 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.375338 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-config-data\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.387291 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm47r\" (UniqueName: \"kubernetes.io/projected/2c972a02-9d35-43d1-9ef6-ab99f7cded50-kube-api-access-sm47r\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6"
Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.563706 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6sjb6"
Feb 18 19:38:01 crc kubenswrapper[4942]: I0218 19:38:01.012321 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dfa84b55-e3b4-425c-983b-57e60b06ee59","Type":"ContainerStarted","Data":"601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb"}
Feb 18 19:38:01 crc kubenswrapper[4942]: I0218 19:38:01.017727 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dfa84b55-e3b4-425c-983b-57e60b06ee59","Type":"ContainerStarted","Data":"57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2"}
Feb 18 19:38:01 crc kubenswrapper[4942]: I0218 19:38:01.076798 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.076774716 podStartE2EDuration="3.076774716s" podCreationTimestamp="2026-02-18 19:37:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:01.052330244 +0000 UTC m=+1240.757262909" watchObservedRunningTime="2026-02-18 19:38:01.076774716 +0000 UTC m=+1240.781707401"
Feb 18 19:38:01 crc kubenswrapper[4942]: I0218 19:38:01.159644 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6sjb6"]
Feb 18 19:38:01 crc kubenswrapper[4942]: W0218 19:38:01.170390 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c972a02_9d35_43d1_9ef6_ab99f7cded50.slice/crio-1818d9219730f11170821ea242e1d7c9a874730058c28d86097d81ff414749bb WatchSource:0}: Error finding container 1818d9219730f11170821ea242e1d7c9a874730058c28d86097d81ff414749bb: Status 404 returned error can't find the container with id 1818d9219730f11170821ea242e1d7c9a874730058c28d86097d81ff414749bb
Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.022870 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6sjb6" event={"ID":"2c972a02-9d35-43d1-9ef6-ab99f7cded50","Type":"ContainerStarted","Data":"493fbf668fd581eae9f157a3d4dd7cefc935750aeaa50d79a8dc2cadd67f3413"}
Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.022917 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6sjb6" event={"ID":"2c972a02-9d35-43d1-9ef6-ab99f7cded50","Type":"ContainerStarted","Data":"1818d9219730f11170821ea242e1d7c9a874730058c28d86097d81ff414749bb"}
Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.030313 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerStarted","Data":"d95c7e55f7d0cdb9979c16f83fcc95679308cf40adf688c3329d8aaa4109711b"}
Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.030352 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.046252 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-6sjb6" podStartSLOduration=2.046231231 podStartE2EDuration="2.046231231s" podCreationTimestamp="2026-02-18 19:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:02.041065645 +0000 UTC m=+1241.745998310" watchObservedRunningTime="2026-02-18 19:38:02.046231231 +0000 UTC m=+1241.751163896"
Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.104261 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.157512312 podStartE2EDuration="6.104231303s" podCreationTimestamp="2026-02-18 19:37:56 +0000 UTC" firstStartedPulling="2026-02-18 19:37:57.112615007 +0000 UTC m=+1236.817547692" lastFinishedPulling="2026-02-18 19:38:01.059334018 +0000 UTC m=+1240.764266683" observedRunningTime="2026-02-18 19:38:02.087550195 +0000 UTC m=+1241.792482860" watchObservedRunningTime="2026-02-18 19:38:02.104231303 +0000 UTC m=+1241.809163998"
Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.363000 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx"
Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.449885 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4gdxj"]
Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.450248 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" podUID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerName="dnsmasq-dns" containerID="cri-o://b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c" gracePeriod=10
Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.996894 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj"
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.037965 4942 generic.go:334] "Generic (PLEG): container finished" podID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerID="b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c" exitCode=0
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.038897 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj"
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.039401 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" event={"ID":"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9","Type":"ContainerDied","Data":"b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c"}
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.039433 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" event={"ID":"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9","Type":"ContainerDied","Data":"ac7c2c212726ec658ded163971fdbf65aa1ee8ef5f331c952d6143e9bfa521d8"}
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.039451 4942 scope.go:117] "RemoveContainer" containerID="b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c"
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.102009 4942 scope.go:117] "RemoveContainer" containerID="5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646"
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.123835 4942 scope.go:117] "RemoveContainer" containerID="b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c"
Feb 18 19:38:03 crc kubenswrapper[4942]: E0218 19:38:03.125231 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c\": container with ID starting with b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c not found: ID does not exist" containerID="b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c"
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.125269 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c"} err="failed to get container status \"b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c\": rpc error: code = NotFound desc = could not find container \"b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c\": container with ID starting with b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c not found: ID does not exist"
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.125296 4942 scope.go:117] "RemoveContainer" containerID="5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646"
Feb 18 19:38:03 crc kubenswrapper[4942]: E0218 19:38:03.128859 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646\": container with ID starting with 5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646 not found: ID does not exist" containerID="5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646"
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.128921 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646"} err="failed to get container status \"5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646\": rpc error: code = NotFound desc = could not find container \"5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646\": container with ID starting with 5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646 not found: ID does not exist"
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.171475 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-config\") pod \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") "
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.171560 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-sb\") pod \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") "
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.171679 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-nb\") pod \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") "
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.171733 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shhc6\" (UniqueName: \"kubernetes.io/projected/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-kube-api-access-shhc6\") pod \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") "
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.171774 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-svc\") pod \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") "
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.171819 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-swift-storage-0\") pod \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") "
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.179900 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-kube-api-access-shhc6" (OuterVolumeSpecName: "kube-api-access-shhc6") pod "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" (UID: "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9"). InnerVolumeSpecName "kube-api-access-shhc6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.228381 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" (UID: "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.229740 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" (UID: "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.231275 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-config" (OuterVolumeSpecName: "config") pod "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" (UID: "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.246145 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" (UID: "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.263510 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" (UID: "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.275514 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.275544 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.275556 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shhc6\" (UniqueName: \"kubernetes.io/projected/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-kube-api-access-shhc6\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.275567 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.275577 4942 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.275585 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.375473 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4gdxj"]
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.388438 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4gdxj"]
Feb 18 19:38:05 crc kubenswrapper[4942]: I0218 19:38:05.049567 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" path="/var/lib/kubelet/pods/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9/volumes"
Feb 18 19:38:07 crc kubenswrapper[4942]: I0218 19:38:07.092992 4942 generic.go:334] "Generic (PLEG): container finished" podID="2c972a02-9d35-43d1-9ef6-ab99f7cded50" containerID="493fbf668fd581eae9f157a3d4dd7cefc935750aeaa50d79a8dc2cadd67f3413" exitCode=0
Feb 18 19:38:07 crc kubenswrapper[4942]: I0218 19:38:07.093225 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6sjb6" event={"ID":"2c972a02-9d35-43d1-9ef6-ab99f7cded50","Type":"ContainerDied","Data":"493fbf668fd581eae9f157a3d4dd7cefc935750aeaa50d79a8dc2cadd67f3413"}
Feb 18 19:38:07 crc kubenswrapper[4942]: I0218 19:38:07.844364 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" podUID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.208:5353: i/o timeout"
Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.537814 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6sjb6" Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.689237 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-scripts\") pod \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.689608 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-combined-ca-bundle\") pod \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.689670 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm47r\" (UniqueName: \"kubernetes.io/projected/2c972a02-9d35-43d1-9ef6-ab99f7cded50-kube-api-access-sm47r\") pod \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.689694 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-config-data\") pod \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.695890 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c972a02-9d35-43d1-9ef6-ab99f7cded50-kube-api-access-sm47r" (OuterVolumeSpecName: "kube-api-access-sm47r") pod "2c972a02-9d35-43d1-9ef6-ab99f7cded50" (UID: "2c972a02-9d35-43d1-9ef6-ab99f7cded50"). InnerVolumeSpecName "kube-api-access-sm47r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.696825 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-scripts" (OuterVolumeSpecName: "scripts") pod "2c972a02-9d35-43d1-9ef6-ab99f7cded50" (UID: "2c972a02-9d35-43d1-9ef6-ab99f7cded50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.725865 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-config-data" (OuterVolumeSpecName: "config-data") pod "2c972a02-9d35-43d1-9ef6-ab99f7cded50" (UID: "2c972a02-9d35-43d1-9ef6-ab99f7cded50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.728011 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c972a02-9d35-43d1-9ef6-ab99f7cded50" (UID: "2c972a02-9d35-43d1-9ef6-ab99f7cded50"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.792053 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.792119 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm47r\" (UniqueName: \"kubernetes.io/projected/2c972a02-9d35-43d1-9ef6-ab99f7cded50-kube-api-access-sm47r\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.792135 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.792145 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.136149 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6sjb6" Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.136128 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6sjb6" event={"ID":"2c972a02-9d35-43d1-9ef6-ab99f7cded50","Type":"ContainerDied","Data":"1818d9219730f11170821ea242e1d7c9a874730058c28d86097d81ff414749bb"} Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.140042 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1818d9219730f11170821ea242e1d7c9a874730058c28d86097d81ff414749bb" Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.302872 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.303558 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.312852 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.411038 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.411087 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.432376 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.446869 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.447269 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="934ec68d-b7d2-4435-8e54-4984cea15920" containerName="nova-scheduler-scheduler" 
containerID="cri-o://f8f16eaf99b27e5378de6b9f610d1eff9cec3f93c1ffd82c5027dc6d962fe712" gracePeriod=30 Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.478088 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:38:10 crc kubenswrapper[4942]: I0218 19:38:10.143927 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-log" containerID="cri-o://57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2" gracePeriod=30 Feb 18 19:38:10 crc kubenswrapper[4942]: I0218 19:38:10.144710 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-api" containerID="cri-o://601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb" gracePeriod=30 Feb 18 19:38:10 crc kubenswrapper[4942]: I0218 19:38:10.155152 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": EOF" Feb 18 19:38:10 crc kubenswrapper[4942]: I0218 19:38:10.155846 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": EOF" Feb 18 19:38:10 crc kubenswrapper[4942]: I0218 19:38:10.162647 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:38:10 crc kubenswrapper[4942]: E0218 19:38:10.192421 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="f8f16eaf99b27e5378de6b9f610d1eff9cec3f93c1ffd82c5027dc6d962fe712" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:38:10 crc kubenswrapper[4942]: E0218 19:38:10.221627 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8f16eaf99b27e5378de6b9f610d1eff9cec3f93c1ffd82c5027dc6d962fe712" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:38:10 crc kubenswrapper[4942]: E0218 19:38:10.237616 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8f16eaf99b27e5378de6b9f610d1eff9cec3f93c1ffd82c5027dc6d962fe712" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:38:10 crc kubenswrapper[4942]: E0218 19:38:10.237698 4942 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="934ec68d-b7d2-4435-8e54-4984cea15920" containerName="nova-scheduler-scheduler" Feb 18 19:38:11 crc kubenswrapper[4942]: I0218 19:38:11.157260 4942 generic.go:334] "Generic (PLEG): container finished" podID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerID="57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2" exitCode=143 Feb 18 19:38:11 crc kubenswrapper[4942]: I0218 19:38:11.157847 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-log" containerID="cri-o://754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf" gracePeriod=30 Feb 18 19:38:11 crc kubenswrapper[4942]: I0218 19:38:11.157936 4942 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dfa84b55-e3b4-425c-983b-57e60b06ee59","Type":"ContainerDied","Data":"57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2"} Feb 18 19:38:11 crc kubenswrapper[4942]: I0218 19:38:11.158329 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-metadata" containerID="cri-o://2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6" gracePeriod=30 Feb 18 19:38:12 crc kubenswrapper[4942]: I0218 19:38:12.169166 4942 generic.go:334] "Generic (PLEG): container finished" podID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerID="754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf" exitCode=143 Feb 18 19:38:12 crc kubenswrapper[4942]: I0218 19:38:12.169220 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f2c79fe-40ed-4218-9db5-ecf2750cd43c","Type":"ContainerDied","Data":"754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf"} Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.193405 4942 generic.go:334] "Generic (PLEG): container finished" podID="934ec68d-b7d2-4435-8e54-4984cea15920" containerID="f8f16eaf99b27e5378de6b9f610d1eff9cec3f93c1ffd82c5027dc6d962fe712" exitCode=0 Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.193450 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"934ec68d-b7d2-4435-8e54-4984cea15920","Type":"ContainerDied","Data":"f8f16eaf99b27e5378de6b9f610d1eff9cec3f93c1ffd82c5027dc6d962fe712"} Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.301254 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": read tcp 
10.217.0.2:40718->10.217.0.217:8775: read: connection reset by peer" Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.301314 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": read tcp 10.217.0.2:40716->10.217.0.217:8775: read: connection reset by peer" Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.459646 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.607072 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-config-data\") pod \"934ec68d-b7d2-4435-8e54-4984cea15920\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.607172 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-combined-ca-bundle\") pod \"934ec68d-b7d2-4435-8e54-4984cea15920\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.608142 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb4b2\" (UniqueName: \"kubernetes.io/projected/934ec68d-b7d2-4435-8e54-4984cea15920-kube-api-access-nb4b2\") pod \"934ec68d-b7d2-4435-8e54-4984cea15920\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.617156 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/934ec68d-b7d2-4435-8e54-4984cea15920-kube-api-access-nb4b2" (OuterVolumeSpecName: "kube-api-access-nb4b2") pod 
"934ec68d-b7d2-4435-8e54-4984cea15920" (UID: "934ec68d-b7d2-4435-8e54-4984cea15920"). InnerVolumeSpecName "kube-api-access-nb4b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.639091 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-config-data" (OuterVolumeSpecName: "config-data") pod "934ec68d-b7d2-4435-8e54-4984cea15920" (UID: "934ec68d-b7d2-4435-8e54-4984cea15920"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.649675 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "934ec68d-b7d2-4435-8e54-4984cea15920" (UID: "934ec68d-b7d2-4435-8e54-4984cea15920"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.710165 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.710324 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.710501 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb4b2\" (UniqueName: \"kubernetes.io/projected/934ec68d-b7d2-4435-8e54-4984cea15920-kube-api-access-nb4b2\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.762011 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.912976 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-nova-metadata-tls-certs\") pod \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.913092 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-config-data\") pod \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.913151 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-logs\") pod \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.913260 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-combined-ca-bundle\") pod \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.913320 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89kwj\" (UniqueName: \"kubernetes.io/projected/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-kube-api-access-89kwj\") pod \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.917075 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-logs" (OuterVolumeSpecName: "logs") pod "8f2c79fe-40ed-4218-9db5-ecf2750cd43c" (UID: "8f2c79fe-40ed-4218-9db5-ecf2750cd43c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.924049 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-kube-api-access-89kwj" (OuterVolumeSpecName: "kube-api-access-89kwj") pod "8f2c79fe-40ed-4218-9db5-ecf2750cd43c" (UID: "8f2c79fe-40ed-4218-9db5-ecf2750cd43c"). InnerVolumeSpecName "kube-api-access-89kwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.945185 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-config-data" (OuterVolumeSpecName: "config-data") pod "8f2c79fe-40ed-4218-9db5-ecf2750cd43c" (UID: "8f2c79fe-40ed-4218-9db5-ecf2750cd43c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.959881 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f2c79fe-40ed-4218-9db5-ecf2750cd43c" (UID: "8f2c79fe-40ed-4218-9db5-ecf2750cd43c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.970792 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8f2c79fe-40ed-4218-9db5-ecf2750cd43c" (UID: "8f2c79fe-40ed-4218-9db5-ecf2750cd43c"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.015276 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.015313 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.015324 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89kwj\" (UniqueName: \"kubernetes.io/projected/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-kube-api-access-89kwj\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.015334 4942 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.015342 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.203397 4942 generic.go:334] "Generic (PLEG): container finished" podID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerID="2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6" exitCode=0 Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.203473 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f2c79fe-40ed-4218-9db5-ecf2750cd43c","Type":"ContainerDied","Data":"2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6"} 
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.203702 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f2c79fe-40ed-4218-9db5-ecf2750cd43c","Type":"ContainerDied","Data":"3ccb0af91234a49cce52e60bb8c5d83c89a7cbfd38f25c2175232126f6780778"} Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.203720 4942 scope.go:117] "RemoveContainer" containerID="2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.203503 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.207191 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"934ec68d-b7d2-4435-8e54-4984cea15920","Type":"ContainerDied","Data":"c51803e068a4df40cf491f2ad59ffe56be6273114ad918ad454d9a2712bc7592"} Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.207348 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.231093 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.231344 4942 scope.go:117] "RemoveContainer" containerID="754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.246791 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.252893 4942 scope.go:117] "RemoveContainer" containerID="2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6" Feb 18 19:38:15 crc kubenswrapper[4942]: E0218 19:38:15.260491 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6\": container with ID starting with 2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6 not found: ID does not exist" containerID="2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.260549 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6"} err="failed to get container status \"2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6\": rpc error: code = NotFound desc = could not find container \"2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6\": container with ID starting with 2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6 not found: ID does not exist" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.260579 4942 scope.go:117] "RemoveContainer" containerID="754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf" Feb 18 19:38:15 crc kubenswrapper[4942]: 
E0218 19:38:15.261094 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf\": container with ID starting with 754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf not found: ID does not exist" containerID="754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.261145 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf"} err="failed to get container status \"754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf\": rpc error: code = NotFound desc = could not find container \"754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf\": container with ID starting with 754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf not found: ID does not exist"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.261177 4942 scope.go:117] "RemoveContainer" containerID="f8f16eaf99b27e5378de6b9f610d1eff9cec3f93c1ffd82c5027dc6d962fe712"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.267804 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.278596 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.293536 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:38:15 crc kubenswrapper[4942]: E0218 19:38:15.294015 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerName="init"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294035 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerName="init"
Feb 18 19:38:15 crc kubenswrapper[4942]: E0218 19:38:15.294048 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c972a02-9d35-43d1-9ef6-ab99f7cded50" containerName="nova-manage"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294054 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c972a02-9d35-43d1-9ef6-ab99f7cded50" containerName="nova-manage"
Feb 18 19:38:15 crc kubenswrapper[4942]: E0218 19:38:15.294075 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerName="dnsmasq-dns"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294082 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerName="dnsmasq-dns"
Feb 18 19:38:15 crc kubenswrapper[4942]: E0218 19:38:15.294092 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-metadata"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294099 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-metadata"
Feb 18 19:38:15 crc kubenswrapper[4942]: E0218 19:38:15.294114 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934ec68d-b7d2-4435-8e54-4984cea15920" containerName="nova-scheduler-scheduler"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294121 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="934ec68d-b7d2-4435-8e54-4984cea15920" containerName="nova-scheduler-scheduler"
Feb 18 19:38:15 crc kubenswrapper[4942]: E0218 19:38:15.294134 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-log"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294140 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-log"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294310 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-metadata"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294326 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-log"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294334 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerName="dnsmasq-dns"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294342 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c972a02-9d35-43d1-9ef6-ab99f7cded50" containerName="nova-manage"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294355 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="934ec68d-b7d2-4435-8e54-4984cea15920" containerName="nova-scheduler-scheduler"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.295630 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.299444 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.300893 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.300939 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.301109 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.303416 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.320881 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.333422 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.437147 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c0f540-3718-4d09-b50f-78677151be71-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " pod="openstack/nova-scheduler-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.437202 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2526c15-03de-4c11-83b4-0bc7689a6b23-logs\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.437276 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dv2s\" (UniqueName: \"kubernetes.io/projected/59c0f540-3718-4d09-b50f-78677151be71-kube-api-access-4dv2s\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " pod="openstack/nova-scheduler-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.437346 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-config-data\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.437400 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c0f540-3718-4d09-b50f-78677151be71-config-data\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " pod="openstack/nova-scheduler-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.437466 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.437561 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2gk4\" (UniqueName: \"kubernetes.io/projected/c2526c15-03de-4c11-83b4-0bc7689a6b23-kube-api-access-f2gk4\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.437612 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.539524 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.539638 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2gk4\" (UniqueName: \"kubernetes.io/projected/c2526c15-03de-4c11-83b4-0bc7689a6b23-kube-api-access-f2gk4\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.539684 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.539735 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c0f540-3718-4d09-b50f-78677151be71-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " pod="openstack/nova-scheduler-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.539791 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2526c15-03de-4c11-83b4-0bc7689a6b23-logs\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.539844 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dv2s\" (UniqueName: \"kubernetes.io/projected/59c0f540-3718-4d09-b50f-78677151be71-kube-api-access-4dv2s\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " pod="openstack/nova-scheduler-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.539893 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-config-data\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.539921 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c0f540-3718-4d09-b50f-78677151be71-config-data\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " pod="openstack/nova-scheduler-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.540471 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2526c15-03de-4c11-83b4-0bc7689a6b23-logs\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.544144 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.544639 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c0f540-3718-4d09-b50f-78677151be71-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " pod="openstack/nova-scheduler-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.545092 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c0f540-3718-4d09-b50f-78677151be71-config-data\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " pod="openstack/nova-scheduler-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.547115 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-config-data\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.552531 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.557271 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2gk4\" (UniqueName: \"kubernetes.io/projected/c2526c15-03de-4c11-83b4-0bc7689a6b23-kube-api-access-f2gk4\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.559095 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dv2s\" (UniqueName: \"kubernetes.io/projected/59c0f540-3718-4d09-b50f-78677151be71-kube-api-access-4dv2s\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " pod="openstack/nova-scheduler-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.628850 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.638330 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.067973 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.150149 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfa84b55-e3b4-425c-983b-57e60b06ee59-logs\") pod \"dfa84b55-e3b4-425c-983b-57e60b06ee59\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") "
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.150208 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-internal-tls-certs\") pod \"dfa84b55-e3b4-425c-983b-57e60b06ee59\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") "
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.150252 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-public-tls-certs\") pod \"dfa84b55-e3b4-425c-983b-57e60b06ee59\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") "
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.150311 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-config-data\") pod \"dfa84b55-e3b4-425c-983b-57e60b06ee59\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") "
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.150351 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-combined-ca-bundle\") pod \"dfa84b55-e3b4-425c-983b-57e60b06ee59\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") "
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.150383 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdxwb\" (UniqueName: \"kubernetes.io/projected/dfa84b55-e3b4-425c-983b-57e60b06ee59-kube-api-access-gdxwb\") pod \"dfa84b55-e3b4-425c-983b-57e60b06ee59\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") "
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.150832 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfa84b55-e3b4-425c-983b-57e60b06ee59-logs" (OuterVolumeSpecName: "logs") pod "dfa84b55-e3b4-425c-983b-57e60b06ee59" (UID: "dfa84b55-e3b4-425c-983b-57e60b06ee59"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.151040 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfa84b55-e3b4-425c-983b-57e60b06ee59-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.155926 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfa84b55-e3b4-425c-983b-57e60b06ee59-kube-api-access-gdxwb" (OuterVolumeSpecName: "kube-api-access-gdxwb") pod "dfa84b55-e3b4-425c-983b-57e60b06ee59" (UID: "dfa84b55-e3b4-425c-983b-57e60b06ee59"). InnerVolumeSpecName "kube-api-access-gdxwb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.199152 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfa84b55-e3b4-425c-983b-57e60b06ee59" (UID: "dfa84b55-e3b4-425c-983b-57e60b06ee59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.214536 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.223819 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dfa84b55-e3b4-425c-983b-57e60b06ee59" (UID: "dfa84b55-e3b4-425c-983b-57e60b06ee59"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.230375 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.233063 4942 generic.go:334] "Generic (PLEG): container finished" podID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerID="601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb" exitCode=0
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.233112 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dfa84b55-e3b4-425c-983b-57e60b06ee59","Type":"ContainerDied","Data":"601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb"}
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.233150 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dfa84b55-e3b4-425c-983b-57e60b06ee59","Type":"ContainerDied","Data":"7455a83c698c3d6e0c19dcaa1a1f353e541c5f547cd7e8fdbe3ffdd928daf970"}
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.233171 4942 scope.go:117] "RemoveContainer" containerID="601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.233173 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.233327 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dfa84b55-e3b4-425c-983b-57e60b06ee59" (UID: "dfa84b55-e3b4-425c-983b-57e60b06ee59"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.239415 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-config-data" (OuterVolumeSpecName: "config-data") pod "dfa84b55-e3b4-425c-983b-57e60b06ee59" (UID: "dfa84b55-e3b4-425c-983b-57e60b06ee59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.253536 4942 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.253577 4942 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.253590 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.253601 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.253614 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdxwb\" (UniqueName: \"kubernetes.io/projected/dfa84b55-e3b4-425c-983b-57e60b06ee59-kube-api-access-gdxwb\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.370195 4942 scope.go:117] "RemoveContainer" containerID="57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.405926 4942 scope.go:117] "RemoveContainer" containerID="601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb"
Feb 18 19:38:16 crc kubenswrapper[4942]: E0218 19:38:16.406353 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb\": container with ID starting with 601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb not found: ID does not exist" containerID="601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.406404 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb"} err="failed to get container status \"601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb\": rpc error: code = NotFound desc = could not find container \"601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb\": container with ID starting with 601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb not found: ID does not exist"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.406432 4942 scope.go:117] "RemoveContainer" containerID="57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2"
Feb 18 19:38:16 crc kubenswrapper[4942]: E0218 19:38:16.406695 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2\": container with ID starting with 57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2 not found: ID does not exist" containerID="57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.406722 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2"} err="failed to get container status \"57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2\": rpc error: code = NotFound desc = could not find container \"57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2\": container with ID starting with 57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2 not found: ID does not exist"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.570030 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.586321 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.597880 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:38:16 crc kubenswrapper[4942]: E0218 19:38:16.598423 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-log"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.598448 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-log"
Feb 18 19:38:16 crc kubenswrapper[4942]: E0218 19:38:16.598486 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-api"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.598496 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-api"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.598733 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-api"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.598782 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-log"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.600112 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.602642 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.603184 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.603346 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.614553 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.766020 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-public-tls-certs\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.766237 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.766397 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.766456 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k48v4\" (UniqueName: \"kubernetes.io/projected/360fd9f7-8dca-4006-a1c3-24f346ff360e-kube-api-access-k48v4\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.766624 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-config-data\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.767062 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/360fd9f7-8dca-4006-a1c3-24f346ff360e-logs\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.869811 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-public-tls-certs\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.870001 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.870173 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.870263 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k48v4\" (UniqueName: \"kubernetes.io/projected/360fd9f7-8dca-4006-a1c3-24f346ff360e-kube-api-access-k48v4\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.871264 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-config-data\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.872546 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/360fd9f7-8dca-4006-a1c3-24f346ff360e-logs\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.873287 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/360fd9f7-8dca-4006-a1c3-24f346ff360e-logs\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.874413 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-public-tls-certs\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.878371 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.878839 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.888466 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-config-data\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.900494 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k48v4\" (UniqueName: \"kubernetes.io/projected/360fd9f7-8dca-4006-a1c3-24f346ff360e-kube-api-access-k48v4\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0"
Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.924914 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.054281 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" path="/var/lib/kubelet/pods/8f2c79fe-40ed-4218-9db5-ecf2750cd43c/volumes"
Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.055667 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="934ec68d-b7d2-4435-8e54-4984cea15920" path="/var/lib/kubelet/pods/934ec68d-b7d2-4435-8e54-4984cea15920/volumes"
Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.057177 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" path="/var/lib/kubelet/pods/dfa84b55-e3b4-425c-983b-57e60b06ee59/volumes"
Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.251138 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59c0f540-3718-4d09-b50f-78677151be71","Type":"ContainerStarted","Data":"6d4cdb9843e3617ce9be9e454605c696cad35ab9d30b6e9edfef9a5d83b07bca"}
Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.251456 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59c0f540-3718-4d09-b50f-78677151be71","Type":"ContainerStarted","Data":"4dd103e04ad3923a12dea9c35490243ed3d6c8414a5f5f99ff9134b6859377d8"}
Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.259125 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c2526c15-03de-4c11-83b4-0bc7689a6b23","Type":"ContainerStarted","Data":"61ee539a751442adb567c24374e8edace8fa52e49d3551635c89816653b3f49c"}
Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.259176 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c2526c15-03de-4c11-83b4-0bc7689a6b23","Type":"ContainerStarted","Data":"010d3227279137e002f4529203fe4c20c0c08675107e2d39375e6c08913b2504"}
Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.259194 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c2526c15-03de-4c11-83b4-0bc7689a6b23","Type":"ContainerStarted","Data":"588b81688a9bc83b0d8097e51e95ace4170b8c4a39d7b111dcf6ff133effe808"}
Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.281749 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.2817266480000002 podStartE2EDuration="2.281726648s" podCreationTimestamp="2026-02-18 19:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:17.277126647 +0000 UTC m=+1256.982059352" watchObservedRunningTime="2026-02-18 19:38:17.281726648 +0000 UTC m=+1256.986659323"
Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.302274 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.302257037 podStartE2EDuration="2.302257037s" podCreationTimestamp="2026-02-18 19:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:17.29592455 +0000 UTC m=+1257.000857245" watchObservedRunningTime="2026-02-18 19:38:17.302257037 +0000 UTC m=+1257.007189702"
Feb 18 19:38:17 crc kubenswrapper[4942]: W0218 19:38:17.359732 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod360fd9f7_8dca_4006_a1c3_24f346ff360e.slice/crio-5c9dbd8559f984496e50528f94e0f78c9e071b45a58e2bbdcb39d25e73d24dec WatchSource:0}: Error finding container 5c9dbd8559f984496e50528f94e0f78c9e071b45a58e2bbdcb39d25e73d24dec: Status
404 returned error can't find the container with id 5c9dbd8559f984496e50528f94e0f78c9e071b45a58e2bbdcb39d25e73d24dec Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.361538 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:38:18 crc kubenswrapper[4942]: I0218 19:38:18.272108 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"360fd9f7-8dca-4006-a1c3-24f346ff360e","Type":"ContainerStarted","Data":"308899099f016f5b8c01d40fcf1e7ef2a91302b3cf849c954e3d065ef43b744e"} Feb 18 19:38:18 crc kubenswrapper[4942]: I0218 19:38:18.272487 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"360fd9f7-8dca-4006-a1c3-24f346ff360e","Type":"ContainerStarted","Data":"81fafda8d71c1fd08cee9c90f4298976a4dafac63ba06e7349f115fa0233d3c2"} Feb 18 19:38:18 crc kubenswrapper[4942]: I0218 19:38:18.272513 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"360fd9f7-8dca-4006-a1c3-24f346ff360e","Type":"ContainerStarted","Data":"5c9dbd8559f984496e50528f94e0f78c9e071b45a58e2bbdcb39d25e73d24dec"} Feb 18 19:38:18 crc kubenswrapper[4942]: I0218 19:38:18.305487 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.305466508 podStartE2EDuration="2.305466508s" podCreationTimestamp="2026-02-18 19:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:18.29905787 +0000 UTC m=+1258.003990585" watchObservedRunningTime="2026-02-18 19:38:18.305466508 +0000 UTC m=+1258.010399173" Feb 18 19:38:20 crc kubenswrapper[4942]: I0218 19:38:20.629416 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:38:20 crc kubenswrapper[4942]: I0218 19:38:20.629660 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-metadata-0" Feb 18 19:38:20 crc kubenswrapper[4942]: I0218 19:38:20.638902 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 19:38:23 crc kubenswrapper[4942]: I0218 19:38:23.741409 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:38:23 crc kubenswrapper[4942]: I0218 19:38:23.742347 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:38:23 crc kubenswrapper[4942]: I0218 19:38:23.742426 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:38:23 crc kubenswrapper[4942]: I0218 19:38:23.743396 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ecda90ff377eb2cb3234b37ad9a8ec87fa575a7e7c5a3a78ee7c2e00f4a7b66"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:38:23 crc kubenswrapper[4942]: I0218 19:38:23.743502 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://8ecda90ff377eb2cb3234b37ad9a8ec87fa575a7e7c5a3a78ee7c2e00f4a7b66" gracePeriod=600 Feb 18 
19:38:24 crc kubenswrapper[4942]: I0218 19:38:24.345553 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="8ecda90ff377eb2cb3234b37ad9a8ec87fa575a7e7c5a3a78ee7c2e00f4a7b66" exitCode=0 Feb 18 19:38:24 crc kubenswrapper[4942]: I0218 19:38:24.345628 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"8ecda90ff377eb2cb3234b37ad9a8ec87fa575a7e7c5a3a78ee7c2e00f4a7b66"} Feb 18 19:38:24 crc kubenswrapper[4942]: I0218 19:38:24.345997 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"0f7c7ce7194dc50e8e7ff903a9631c5d1d6654771462dbd4df2dfa299f3641bf"} Feb 18 19:38:24 crc kubenswrapper[4942]: I0218 19:38:24.346025 4942 scope.go:117] "RemoveContainer" containerID="4ad75b87330a71997979db298f42e179882b61890e654d3a0c077cf25d5cb90b" Feb 18 19:38:25 crc kubenswrapper[4942]: I0218 19:38:25.630130 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:38:25 crc kubenswrapper[4942]: I0218 19:38:25.632212 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:38:25 crc kubenswrapper[4942]: I0218 19:38:25.639484 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 19:38:25 crc kubenswrapper[4942]: I0218 19:38:25.692182 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 19:38:26 crc kubenswrapper[4942]: I0218 19:38:26.409949 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 19:38:26 crc kubenswrapper[4942]: 
I0218 19:38:26.620713 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 19:38:26 crc kubenswrapper[4942]: I0218 19:38:26.660629 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c2526c15-03de-4c11-83b4-0bc7689a6b23" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:38:26 crc kubenswrapper[4942]: I0218 19:38:26.661050 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c2526c15-03de-4c11-83b4-0bc7689a6b23" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:38:26 crc kubenswrapper[4942]: I0218 19:38:26.925434 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:38:26 crc kubenswrapper[4942]: I0218 19:38:26.925526 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:38:27 crc kubenswrapper[4942]: I0218 19:38:27.938104 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="360fd9f7-8dca-4006-a1c3-24f346ff360e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:38:27 crc kubenswrapper[4942]: I0218 19:38:27.938435 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="360fd9f7-8dca-4006-a1c3-24f346ff360e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:38:35 crc kubenswrapper[4942]: I0218 19:38:35.639492 
4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:38:35 crc kubenswrapper[4942]: I0218 19:38:35.648748 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:38:35 crc kubenswrapper[4942]: I0218 19:38:35.652657 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:38:36 crc kubenswrapper[4942]: I0218 19:38:36.482962 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:38:36 crc kubenswrapper[4942]: I0218 19:38:36.933938 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:38:36 crc kubenswrapper[4942]: I0218 19:38:36.935378 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:38:36 crc kubenswrapper[4942]: I0218 19:38:36.938873 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:38:36 crc kubenswrapper[4942]: I0218 19:38:36.942664 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:38:37 crc kubenswrapper[4942]: I0218 19:38:37.492990 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:38:37 crc kubenswrapper[4942]: I0218 19:38:37.506072 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:38:45 crc kubenswrapper[4942]: I0218 19:38:45.229128 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:38:46 crc kubenswrapper[4942]: I0218 19:38:46.227925 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:38:49 crc kubenswrapper[4942]: I0218 19:38:49.281836 4942 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="77de5cb0-e446-407d-9e32-b13f39c84ae2" containerName="rabbitmq" containerID="cri-o://2a06461943313e923de9b2391c5eb34c6a9c08986670b8d6bae063427214e0e7" gracePeriod=604796 Feb 18 19:38:50 crc kubenswrapper[4942]: I0218 19:38:50.856197 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="b6b41292-c562-4964-bb25-d8945415b3da" containerName="rabbitmq" containerID="cri-o://4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94" gracePeriod=604796 Feb 18 19:38:55 crc kubenswrapper[4942]: I0218 19:38:55.677887 4942 generic.go:334] "Generic (PLEG): container finished" podID="77de5cb0-e446-407d-9e32-b13f39c84ae2" containerID="2a06461943313e923de9b2391c5eb34c6a9c08986670b8d6bae063427214e0e7" exitCode=0 Feb 18 19:38:55 crc kubenswrapper[4942]: I0218 19:38:55.678242 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"77de5cb0-e446-407d-9e32-b13f39c84ae2","Type":"ContainerDied","Data":"2a06461943313e923de9b2391c5eb34c6a9c08986670b8d6bae063427214e0e7"} Feb 18 19:38:55 crc kubenswrapper[4942]: I0218 19:38:55.965134 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.097784 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-tls\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.097825 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77de5cb0-e446-407d-9e32-b13f39c84ae2-erlang-cookie-secret\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.097932 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77de5cb0-e446-407d-9e32-b13f39c84ae2-pod-info\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.097982 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-config-data\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.098018 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-confd\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.098091 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.098115 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-plugins\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.098145 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-server-conf\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.098166 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-plugins-conf\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.098194 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-erlang-cookie\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.098217 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wqkf\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-kube-api-access-8wqkf\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 
19:38:56.101038 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.103405 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.104452 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.105192 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77de5cb0-e446-407d-9e32-b13f39c84ae2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.109294 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.111271 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.113910 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-kube-api-access-8wqkf" (OuterVolumeSpecName: "kube-api-access-8wqkf") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "kube-api-access-8wqkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.115872 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/77de5cb0-e446-407d-9e32-b13f39c84ae2-pod-info" (OuterVolumeSpecName: "pod-info") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.132123 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-config-data" (OuterVolumeSpecName: "config-data") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.187740 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-server-conf" (OuterVolumeSpecName: "server-conf") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210594 4942 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77de5cb0-e446-407d-9e32-b13f39c84ae2-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210625 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210645 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210658 4942 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc 
kubenswrapper[4942]: I0218 19:38:56.210669 4942 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210677 4942 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210686 4942 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210695 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wqkf\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-kube-api-access-8wqkf\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210703 4942 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210712 4942 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77de5cb0-e446-407d-9e32-b13f39c84ae2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.241988 4942 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.261658 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.312975 4942 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.313208 4942 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.688772 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"77de5cb0-e446-407d-9e32-b13f39c84ae2","Type":"ContainerDied","Data":"f25769d8510cd516ae5401d18772436aec7e570a6454b6b2469618103a8155cf"} Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.688827 4942 scope.go:117] "RemoveContainer" containerID="2a06461943313e923de9b2391c5eb34c6a9c08986670b8d6bae063427214e0e7" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.688965 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.720629 4942 scope.go:117] "RemoveContainer" containerID="e242de7f4af5755759f500d3c9dbc2395ec18d3bfe3fe38cf008cae5b5314de3" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.728859 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.737938 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.766441 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:38:56 crc kubenswrapper[4942]: E0218 19:38:56.766935 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77de5cb0-e446-407d-9e32-b13f39c84ae2" containerName="rabbitmq" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.766954 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="77de5cb0-e446-407d-9e32-b13f39c84ae2" containerName="rabbitmq" Feb 18 19:38:56 crc kubenswrapper[4942]: E0218 19:38:56.766978 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77de5cb0-e446-407d-9e32-b13f39c84ae2" containerName="setup-container" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.766986 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="77de5cb0-e446-407d-9e32-b13f39c84ae2" containerName="setup-container" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.767223 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="77de5cb0-e446-407d-9e32-b13f39c84ae2" containerName="rabbitmq" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.768496 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.770518 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.770799 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.771052 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.771164 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.771220 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.772675 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jnzzx" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.772717 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.796349 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.841220 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-config-data\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.841296 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.841352 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42559616-368c-4628-8d82-75bfc94dcbaf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.841555 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.841661 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.841872 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42559616-368c-4628-8d82-75bfc94dcbaf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.842180 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65dcm\" (UniqueName: 
\"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-kube-api-access-65dcm\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.842277 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.842327 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.842630 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.843738 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.945786 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42559616-368c-4628-8d82-75bfc94dcbaf-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946081 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65dcm\" (UniqueName: \"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-kube-api-access-65dcm\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946112 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946134 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946209 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946252 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946274 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-config-data\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946295 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946315 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42559616-368c-4628-8d82-75bfc94dcbaf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946338 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946361 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.947546 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-server-conf\") 
pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.948104 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.948590 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-config-data\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.951152 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.952283 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.952405 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 
19:38:56.968648 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.969409 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42559616-368c-4628-8d82-75bfc94dcbaf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.975578 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.980268 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42559616-368c-4628-8d82-75bfc94dcbaf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.997789 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65dcm\" (UniqueName: \"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-kube-api-access-65dcm\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.032000 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod 
\"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.047676 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77de5cb0-e446-407d-9e32-b13f39c84ae2" path="/var/lib/kubelet/pods/77de5cb0-e446-407d-9e32-b13f39c84ae2/volumes" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.161629 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.430415 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566118 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-plugins-conf\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566229 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6b41292-c562-4964-bb25-d8945415b3da-pod-info\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566328 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9vpz\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-kube-api-access-p9vpz\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566404 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/b6b41292-c562-4964-bb25-d8945415b3da-erlang-cookie-secret\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566429 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-config-data\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566452 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-tls\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566478 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566549 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-server-conf\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566621 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-plugins\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566642 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-confd\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566697 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-erlang-cookie\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566995 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.567712 4942 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.568126 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.574551 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.574606 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b6b41292-c562-4964-bb25-d8945415b3da-pod-info" (OuterVolumeSpecName: "pod-info") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.574954 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.582327 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-kube-api-access-p9vpz" (OuterVolumeSpecName: "kube-api-access-p9vpz") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "kube-api-access-p9vpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.588821 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.600027 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b41292-c562-4964-bb25-d8945415b3da-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.615256 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-config-data" (OuterVolumeSpecName: "config-data") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.673210 4942 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6b41292-c562-4964-bb25-d8945415b3da-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.673241 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9vpz\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-kube-api-access-p9vpz\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.673252 4942 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6b41292-c562-4964-bb25-d8945415b3da-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.673261 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.673272 4942 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.673292 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.673300 4942 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.673311 4942 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.696365 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-server-conf" (OuterVolumeSpecName: "server-conf") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.710369 4942 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.736877 4942 generic.go:334] "Generic (PLEG): container finished" podID="b6b41292-c562-4964-bb25-d8945415b3da" containerID="4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94" exitCode=0 Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.737163 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b6b41292-c562-4964-bb25-d8945415b3da","Type":"ContainerDied","Data":"4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94"} Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.737206 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b6b41292-c562-4964-bb25-d8945415b3da","Type":"ContainerDied","Data":"dbe1e5a24b02c9ef82c5a83259f9ae73faa64933195a6e2349f17abe3b76bba3"} Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.737224 4942 scope.go:117] "RemoveContainer" containerID="4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.737333 4942 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.746698 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.775084 4942 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.775109 4942 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.780036 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.802524 4942 scope.go:117] "RemoveContainer" containerID="c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.839320 4942 scope.go:117] "RemoveContainer" containerID="4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94" Feb 18 19:38:57 crc kubenswrapper[4942]: E0218 19:38:57.839873 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94\": container with ID starting with 4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94 not found: ID does not exist" containerID="4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.839923 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94"} err="failed to get container status \"4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94\": rpc error: code = NotFound desc = could not find container \"4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94\": container with ID starting with 4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94 not found: ID does not exist" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.839951 4942 scope.go:117] "RemoveContainer" containerID="c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782" Feb 18 19:38:57 crc kubenswrapper[4942]: E0218 19:38:57.840420 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782\": container with ID starting with 
c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782 not found: ID does not exist" containerID="c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.840489 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782"} err="failed to get container status \"c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782\": rpc error: code = NotFound desc = could not find container \"c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782\": container with ID starting with c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782 not found: ID does not exist" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.878037 4942 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.077713 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.089269 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.110057 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:38:58 crc kubenswrapper[4942]: E0218 19:38:58.110645 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b41292-c562-4964-bb25-d8945415b3da" containerName="setup-container" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.110677 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b41292-c562-4964-bb25-d8945415b3da" containerName="setup-container" Feb 18 19:38:58 crc kubenswrapper[4942]: E0218 19:38:58.110712 4942 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b6b41292-c562-4964-bb25-d8945415b3da" containerName="rabbitmq" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.110723 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b41292-c562-4964-bb25-d8945415b3da" containerName="rabbitmq" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.111428 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b41292-c562-4964-bb25-d8945415b3da" containerName="rabbitmq" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.114025 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.116384 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.116863 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.117185 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.117377 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.117462 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.119449 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wp8g5" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.121869 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.130149 4942 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.184050 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.184276 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc294c27-1cd0-4930-8f8d-efe5d0127708-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.184423 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.184512 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.184605 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc294c27-1cd0-4930-8f8d-efe5d0127708-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.184707 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.184988 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jgrg\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-kube-api-access-9jgrg\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.185148 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.185273 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.185407 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.185526 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287106 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287159 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jgrg\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-kube-api-access-9jgrg\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287204 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287246 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc 
kubenswrapper[4942]: I0218 19:38:58.287300 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287322 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287380 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287451 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc294c27-1cd0-4930-8f8d-efe5d0127708-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287498 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287499 4942 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287522 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287545 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc294c27-1cd0-4930-8f8d-efe5d0127708-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.288352 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.288373 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.288677 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.288991 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.289166 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.292187 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.296828 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc294c27-1cd0-4930-8f8d-efe5d0127708-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.296847 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc294c27-1cd0-4930-8f8d-efe5d0127708-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.297198 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.304546 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jgrg\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-kube-api-access-9jgrg\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.325597 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.438329 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.756888 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42559616-368c-4628-8d82-75bfc94dcbaf","Type":"ContainerStarted","Data":"d03fa74286e0ba9f49dc010e97e62f872ca0134167eef531b9ebdfccd9d98ca4"} Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.928819 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:38:59 crc kubenswrapper[4942]: I0218 19:38:59.048017 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6b41292-c562-4964-bb25-d8945415b3da" path="/var/lib/kubelet/pods/b6b41292-c562-4964-bb25-d8945415b3da/volumes" Feb 18 19:38:59 crc kubenswrapper[4942]: I0218 19:38:59.798866 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc294c27-1cd0-4930-8f8d-efe5d0127708","Type":"ContainerStarted","Data":"77a10bf3a2a906137c236a38ae42f6dba5df2b51344c9966142ff8d18c6e0d91"} Feb 18 19:38:59 crc kubenswrapper[4942]: I0218 19:38:59.811550 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42559616-368c-4628-8d82-75bfc94dcbaf","Type":"ContainerStarted","Data":"b1fb5a34758585bcfa34bf0a4375593df263351ddd74d3023184593f9a1853a7"} Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.564696 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gx5qf"] Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.567101 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.570708 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.584935 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gx5qf"] Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.747474 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.747552 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-config\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.747605 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.747637 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.747656 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.747680 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.747948 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52hb8\" (UniqueName: \"kubernetes.io/projected/97a40a08-ea01-4347-90ed-4b250d289c34-kube-api-access-52hb8\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.823450 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc294c27-1cd0-4930-8f8d-efe5d0127708","Type":"ContainerStarted","Data":"c70066480e0fa5885f66e21f6662d6bce6b0511882161ee08aeff8aefdb90858"} Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.850419 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 
19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.850487 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-config\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.850559 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.850598 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.850623 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.850650 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.850699 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52hb8\" (UniqueName: \"kubernetes.io/projected/97a40a08-ea01-4347-90ed-4b250d289c34-kube-api-access-52hb8\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.851435 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.851529 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.851642 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-config\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.851642 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.852218 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.852262 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.879176 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52hb8\" (UniqueName: \"kubernetes.io/projected/97a40a08-ea01-4347-90ed-4b250d289c34-kube-api-access-52hb8\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.891404 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:01 crc kubenswrapper[4942]: I0218 19:39:01.373116 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gx5qf"] Feb 18 19:39:01 crc kubenswrapper[4942]: W0218 19:39:01.375036 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97a40a08_ea01_4347_90ed_4b250d289c34.slice/crio-59099e034c4d122adcd7ea90d69e8e41e804845a9dc0889d53db0e99ee740e0d WatchSource:0}: Error finding container 59099e034c4d122adcd7ea90d69e8e41e804845a9dc0889d53db0e99ee740e0d: Status 404 returned error can't find the container with id 59099e034c4d122adcd7ea90d69e8e41e804845a9dc0889d53db0e99ee740e0d Feb 18 19:39:01 crc kubenswrapper[4942]: I0218 19:39:01.833381 4942 generic.go:334] "Generic (PLEG): container finished" podID="97a40a08-ea01-4347-90ed-4b250d289c34" containerID="b2da2cadf4d9d0ef1f345f8cd9fb7b2bb034daca3e383be305fa0578f8000df3" exitCode=0 Feb 18 19:39:01 crc kubenswrapper[4942]: I0218 19:39:01.833457 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" event={"ID":"97a40a08-ea01-4347-90ed-4b250d289c34","Type":"ContainerDied","Data":"b2da2cadf4d9d0ef1f345f8cd9fb7b2bb034daca3e383be305fa0578f8000df3"} Feb 18 19:39:01 crc kubenswrapper[4942]: I0218 19:39:01.833505 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" event={"ID":"97a40a08-ea01-4347-90ed-4b250d289c34","Type":"ContainerStarted","Data":"59099e034c4d122adcd7ea90d69e8e41e804845a9dc0889d53db0e99ee740e0d"} Feb 18 19:39:02 crc kubenswrapper[4942]: I0218 19:39:02.327063 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="b6b41292-c562-4964-bb25-d8945415b3da" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: i/o timeout" Feb 18 19:39:02 crc 
kubenswrapper[4942]: I0218 19:39:02.846045 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" event={"ID":"97a40a08-ea01-4347-90ed-4b250d289c34","Type":"ContainerStarted","Data":"548b21cd31fc7101d993f909165aaa180afcc87c620cb5fd0e5079acfc3bdef3"} Feb 18 19:39:02 crc kubenswrapper[4942]: I0218 19:39:02.846191 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:02 crc kubenswrapper[4942]: I0218 19:39:02.884780 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" podStartSLOduration=2.884750548 podStartE2EDuration="2.884750548s" podCreationTimestamp="2026-02-18 19:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:02.878683225 +0000 UTC m=+1302.583615890" watchObservedRunningTime="2026-02-18 19:39:02.884750548 +0000 UTC m=+1302.589683213" Feb 18 19:39:10 crc kubenswrapper[4942]: I0218 19:39:10.894520 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.000821 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7jhpx"] Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.001176 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" podUID="7097c36f-c705-4a21-be80-ea057d24ace8" containerName="dnsmasq-dns" containerID="cri-o://59bdba50db92d7f040d8a79e5e6b99a3471a426e80e12a58995334733d255e36" gracePeriod=10 Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.450656 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-7bbf4"] Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.453296 4942 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.474490 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-7bbf4"] Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.631678 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.631785 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.632448 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.632527 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.632683 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndv7z\" (UniqueName: \"kubernetes.io/projected/f492d190-cab0-4416-b2c4-46d4485d89e0-kube-api-access-ndv7z\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.632838 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-config\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.632888 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.734833 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.734967 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.735036 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.735147 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.735217 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndv7z\" (UniqueName: \"kubernetes.io/projected/f492d190-cab0-4416-b2c4-46d4485d89e0-kube-api-access-ndv7z\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.735278 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-config\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.735302 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.736701 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.736830 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-config\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.736972 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.737037 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.737229 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.737302 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-ovsdbserver-nb\") pod 
\"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.769224 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndv7z\" (UniqueName: \"kubernetes.io/projected/f492d190-cab0-4416-b2c4-46d4485d89e0-kube-api-access-ndv7z\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.781335 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.021918 4942 generic.go:334] "Generic (PLEG): container finished" podID="7097c36f-c705-4a21-be80-ea057d24ace8" containerID="59bdba50db92d7f040d8a79e5e6b99a3471a426e80e12a58995334733d255e36" exitCode=0 Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.022216 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" event={"ID":"7097c36f-c705-4a21-be80-ea057d24ace8","Type":"ContainerDied","Data":"59bdba50db92d7f040d8a79e5e6b99a3471a426e80e12a58995334733d255e36"} Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.117740 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.267948 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-svc\") pod \"7097c36f-c705-4a21-be80-ea057d24ace8\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.267989 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-swift-storage-0\") pod \"7097c36f-c705-4a21-be80-ea057d24ace8\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.268124 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-config\") pod \"7097c36f-c705-4a21-be80-ea057d24ace8\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.268216 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkld8\" (UniqueName: \"kubernetes.io/projected/7097c36f-c705-4a21-be80-ea057d24ace8-kube-api-access-nkld8\") pod \"7097c36f-c705-4a21-be80-ea057d24ace8\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.268233 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-sb\") pod \"7097c36f-c705-4a21-be80-ea057d24ace8\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.268294 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-nb\") pod \"7097c36f-c705-4a21-be80-ea057d24ace8\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.293732 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7097c36f-c705-4a21-be80-ea057d24ace8-kube-api-access-nkld8" (OuterVolumeSpecName: "kube-api-access-nkld8") pod "7097c36f-c705-4a21-be80-ea057d24ace8" (UID: "7097c36f-c705-4a21-be80-ea057d24ace8"). InnerVolumeSpecName "kube-api-access-nkld8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.332622 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7097c36f-c705-4a21-be80-ea057d24ace8" (UID: "7097c36f-c705-4a21-be80-ea057d24ace8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.346792 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7097c36f-c705-4a21-be80-ea057d24ace8" (UID: "7097c36f-c705-4a21-be80-ea057d24ace8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.348233 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7097c36f-c705-4a21-be80-ea057d24ace8" (UID: "7097c36f-c705-4a21-be80-ea057d24ace8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.353698 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7097c36f-c705-4a21-be80-ea057d24ace8" (UID: "7097c36f-c705-4a21-be80-ea057d24ace8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.354880 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-config" (OuterVolumeSpecName: "config") pod "7097c36f-c705-4a21-be80-ea057d24ace8" (UID: "7097c36f-c705-4a21-be80-ea057d24ace8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.370419 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.370458 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkld8\" (UniqueName: \"kubernetes.io/projected/7097c36f-c705-4a21-be80-ea057d24ace8-kube-api-access-nkld8\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.370468 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.370476 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 
19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.370486 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.370495 4942 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.454920 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-7bbf4"] Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.034643 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.034634 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" event={"ID":"7097c36f-c705-4a21-be80-ea057d24ace8","Type":"ContainerDied","Data":"5c67289996fab91f1e19ef4b863aed3cd05ec958251ed161ac176da9f1432384"} Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.035232 4942 scope.go:117] "RemoveContainer" containerID="59bdba50db92d7f040d8a79e5e6b99a3471a426e80e12a58995334733d255e36" Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.036915 4942 generic.go:334] "Generic (PLEG): container finished" podID="f492d190-cab0-4416-b2c4-46d4485d89e0" containerID="18c444ebc7d363aec8c613d09feb7cdf24dd60cc24c0375c4813030aa1ffc1f5" exitCode=0 Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.051668 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" event={"ID":"f492d190-cab0-4416-b2c4-46d4485d89e0","Type":"ContainerDied","Data":"18c444ebc7d363aec8c613d09feb7cdf24dd60cc24c0375c4813030aa1ffc1f5"} Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.051704 
4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" event={"ID":"f492d190-cab0-4416-b2c4-46d4485d89e0","Type":"ContainerStarted","Data":"02af4d14c52016aa2562402100c28c6c3c0217a21b01e377e0cc017044202514"} Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.061397 4942 scope.go:117] "RemoveContainer" containerID="81cc4bd58d4674e6299bf3f92627b59ac247ba15bf8a7017013a911bae4a12c5" Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.101872 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7jhpx"] Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.132436 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7jhpx"] Feb 18 19:39:14 crc kubenswrapper[4942]: I0218 19:39:14.048419 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" event={"ID":"f492d190-cab0-4416-b2c4-46d4485d89e0","Type":"ContainerStarted","Data":"4bae647aa7b7d2b4de867c1c258980055290ece36f7258a7e17a12434075b909"} Feb 18 19:39:14 crc kubenswrapper[4942]: I0218 19:39:14.049751 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:14 crc kubenswrapper[4942]: I0218 19:39:14.073368 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" podStartSLOduration=3.073341459 podStartE2EDuration="3.073341459s" podCreationTimestamp="2026-02-18 19:39:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:14.064262615 +0000 UTC m=+1313.769195280" watchObservedRunningTime="2026-02-18 19:39:14.073341459 +0000 UTC m=+1313.778274124" Feb 18 19:39:15 crc kubenswrapper[4942]: I0218 19:39:15.052872 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7097c36f-c705-4a21-be80-ea057d24ace8" 
path="/var/lib/kubelet/pods/7097c36f-c705-4a21-be80-ea057d24ace8/volumes" Feb 18 19:39:21 crc kubenswrapper[4942]: I0218 19:39:21.784093 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:21 crc kubenswrapper[4942]: I0218 19:39:21.860551 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gx5qf"] Feb 18 19:39:21 crc kubenswrapper[4942]: I0218 19:39:21.860859 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" podUID="97a40a08-ea01-4347-90ed-4b250d289c34" containerName="dnsmasq-dns" containerID="cri-o://548b21cd31fc7101d993f909165aaa180afcc87c620cb5fd0e5079acfc3bdef3" gracePeriod=10 Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.137049 4942 generic.go:334] "Generic (PLEG): container finished" podID="97a40a08-ea01-4347-90ed-4b250d289c34" containerID="548b21cd31fc7101d993f909165aaa180afcc87c620cb5fd0e5079acfc3bdef3" exitCode=0 Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.137159 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" event={"ID":"97a40a08-ea01-4347-90ed-4b250d289c34","Type":"ContainerDied","Data":"548b21cd31fc7101d993f909165aaa180afcc87c620cb5fd0e5079acfc3bdef3"} Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.375183 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.476657 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-nb\") pod \"97a40a08-ea01-4347-90ed-4b250d289c34\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.476944 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-openstack-edpm-ipam\") pod \"97a40a08-ea01-4347-90ed-4b250d289c34\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.477031 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-sb\") pod \"97a40a08-ea01-4347-90ed-4b250d289c34\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.477172 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-swift-storage-0\") pod \"97a40a08-ea01-4347-90ed-4b250d289c34\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.477204 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52hb8\" (UniqueName: \"kubernetes.io/projected/97a40a08-ea01-4347-90ed-4b250d289c34-kube-api-access-52hb8\") pod \"97a40a08-ea01-4347-90ed-4b250d289c34\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.477259 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-config\") pod \"97a40a08-ea01-4347-90ed-4b250d289c34\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.477291 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-svc\") pod \"97a40a08-ea01-4347-90ed-4b250d289c34\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.495955 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a40a08-ea01-4347-90ed-4b250d289c34-kube-api-access-52hb8" (OuterVolumeSpecName: "kube-api-access-52hb8") pod "97a40a08-ea01-4347-90ed-4b250d289c34" (UID: "97a40a08-ea01-4347-90ed-4b250d289c34"). InnerVolumeSpecName "kube-api-access-52hb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.533167 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "97a40a08-ea01-4347-90ed-4b250d289c34" (UID: "97a40a08-ea01-4347-90ed-4b250d289c34"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.549279 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "97a40a08-ea01-4347-90ed-4b250d289c34" (UID: "97a40a08-ea01-4347-90ed-4b250d289c34"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.555406 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "97a40a08-ea01-4347-90ed-4b250d289c34" (UID: "97a40a08-ea01-4347-90ed-4b250d289c34"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.559479 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-config" (OuterVolumeSpecName: "config") pod "97a40a08-ea01-4347-90ed-4b250d289c34" (UID: "97a40a08-ea01-4347-90ed-4b250d289c34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.561237 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "97a40a08-ea01-4347-90ed-4b250d289c34" (UID: "97a40a08-ea01-4347-90ed-4b250d289c34"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.567480 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "97a40a08-ea01-4347-90ed-4b250d289c34" (UID: "97a40a08-ea01-4347-90ed-4b250d289c34"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.579535 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.579632 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.579692 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.579748 4942 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.579844 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.579896 4942 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.579943 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52hb8\" (UniqueName: \"kubernetes.io/projected/97a40a08-ea01-4347-90ed-4b250d289c34-kube-api-access-52hb8\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:23 crc kubenswrapper[4942]: I0218 19:39:23.148888 
4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" event={"ID":"97a40a08-ea01-4347-90ed-4b250d289c34","Type":"ContainerDied","Data":"59099e034c4d122adcd7ea90d69e8e41e804845a9dc0889d53db0e99ee740e0d"} Feb 18 19:39:23 crc kubenswrapper[4942]: I0218 19:39:23.148944 4942 scope.go:117] "RemoveContainer" containerID="548b21cd31fc7101d993f909165aaa180afcc87c620cb5fd0e5079acfc3bdef3" Feb 18 19:39:23 crc kubenswrapper[4942]: I0218 19:39:23.148977 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:23 crc kubenswrapper[4942]: I0218 19:39:23.176863 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gx5qf"] Feb 18 19:39:23 crc kubenswrapper[4942]: I0218 19:39:23.186143 4942 scope.go:117] "RemoveContainer" containerID="b2da2cadf4d9d0ef1f345f8cd9fb7b2bb034daca3e383be305fa0578f8000df3" Feb 18 19:39:23 crc kubenswrapper[4942]: I0218 19:39:23.187448 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gx5qf"] Feb 18 19:39:25 crc kubenswrapper[4942]: I0218 19:39:25.051154 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97a40a08-ea01-4347-90ed-4b250d289c34" path="/var/lib/kubelet/pods/97a40a08-ea01-4347-90ed-4b250d289c34/volumes" Feb 18 19:39:32 crc kubenswrapper[4942]: I0218 19:39:32.239812 4942 generic.go:334] "Generic (PLEG): container finished" podID="42559616-368c-4628-8d82-75bfc94dcbaf" containerID="b1fb5a34758585bcfa34bf0a4375593df263351ddd74d3023184593f9a1853a7" exitCode=0 Feb 18 19:39:32 crc kubenswrapper[4942]: I0218 19:39:32.239916 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42559616-368c-4628-8d82-75bfc94dcbaf","Type":"ContainerDied","Data":"b1fb5a34758585bcfa34bf0a4375593df263351ddd74d3023184593f9a1853a7"} Feb 18 19:39:33 crc kubenswrapper[4942]: I0218 19:39:33.249657 4942 
generic.go:334] "Generic (PLEG): container finished" podID="cc294c27-1cd0-4930-8f8d-efe5d0127708" containerID="c70066480e0fa5885f66e21f6662d6bce6b0511882161ee08aeff8aefdb90858" exitCode=0 Feb 18 19:39:33 crc kubenswrapper[4942]: I0218 19:39:33.249757 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc294c27-1cd0-4930-8f8d-efe5d0127708","Type":"ContainerDied","Data":"c70066480e0fa5885f66e21f6662d6bce6b0511882161ee08aeff8aefdb90858"} Feb 18 19:39:33 crc kubenswrapper[4942]: I0218 19:39:33.253200 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42559616-368c-4628-8d82-75bfc94dcbaf","Type":"ContainerStarted","Data":"02b9627c8348b2f40510e9014b2215f9a89cda25fb40c87099392acac7054104"} Feb 18 19:39:33 crc kubenswrapper[4942]: I0218 19:39:33.253430 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 19:39:33 crc kubenswrapper[4942]: I0218 19:39:33.320811 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.320789437 podStartE2EDuration="37.320789437s" podCreationTimestamp="2026-02-18 19:38:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:33.314146709 +0000 UTC m=+1333.019079394" watchObservedRunningTime="2026-02-18 19:39:33.320789437 +0000 UTC m=+1333.025722112" Feb 18 19:39:34 crc kubenswrapper[4942]: I0218 19:39:34.264900 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc294c27-1cd0-4930-8f8d-efe5d0127708","Type":"ContainerStarted","Data":"247bd27b9e9a99eb506e4c094a8cd2257ecc949b2b78d1822aebc24397327dcb"} Feb 18 19:39:34 crc kubenswrapper[4942]: I0218 19:39:34.265454 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:39:34 crc kubenswrapper[4942]: I0218 19:39:34.294062 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.294038307 podStartE2EDuration="36.294038307s" podCreationTimestamp="2026-02-18 19:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:34.287974504 +0000 UTC m=+1333.992907249" watchObservedRunningTime="2026-02-18 19:39:34.294038307 +0000 UTC m=+1333.998970972" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.344921 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk"] Feb 18 19:39:35 crc kubenswrapper[4942]: E0218 19:39:35.345688 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7097c36f-c705-4a21-be80-ea057d24ace8" containerName="init" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.345709 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="7097c36f-c705-4a21-be80-ea057d24ace8" containerName="init" Feb 18 19:39:35 crc kubenswrapper[4942]: E0218 19:39:35.345745 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a40a08-ea01-4347-90ed-4b250d289c34" containerName="dnsmasq-dns" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.345753 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a40a08-ea01-4347-90ed-4b250d289c34" containerName="dnsmasq-dns" Feb 18 19:39:35 crc kubenswrapper[4942]: E0218 19:39:35.345787 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7097c36f-c705-4a21-be80-ea057d24ace8" containerName="dnsmasq-dns" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.345795 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="7097c36f-c705-4a21-be80-ea057d24ace8" containerName="dnsmasq-dns" Feb 18 19:39:35 crc kubenswrapper[4942]: E0218 
19:39:35.345817 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a40a08-ea01-4347-90ed-4b250d289c34" containerName="init" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.345824 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a40a08-ea01-4347-90ed-4b250d289c34" containerName="init" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.346058 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a40a08-ea01-4347-90ed-4b250d289c34" containerName="dnsmasq-dns" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.346088 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="7097c36f-c705-4a21-be80-ea057d24ace8" containerName="dnsmasq-dns" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.346938 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.348592 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.349048 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.349313 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.349592 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.360872 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk"] Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.442307 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.442405 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.442501 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.442569 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wvhh\" (UniqueName: \"kubernetes.io/projected/d972a9f6-b2f0-46db-a51b-b47575ff72d6-kube-api-access-6wvhh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.544821 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.545183 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.545363 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wvhh\" (UniqueName: \"kubernetes.io/projected/d972a9f6-b2f0-46db-a51b-b47575ff72d6-kube-api-access-6wvhh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.545550 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.551347 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.551387 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.553460 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.564806 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wvhh\" (UniqueName: \"kubernetes.io/projected/d972a9f6-b2f0-46db-a51b-b47575ff72d6-kube-api-access-6wvhh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.672092 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk"
Feb 18 19:39:36 crc kubenswrapper[4942]: I0218 19:39:36.358416 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk"]
Feb 18 19:39:37 crc kubenswrapper[4942]: I0218 19:39:37.289008 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" event={"ID":"d972a9f6-b2f0-46db-a51b-b47575ff72d6","Type":"ContainerStarted","Data":"45552895c35ce022126d6d4ac53308b70b4fce8a439abac0a760fc1143a57ced"}
Feb 18 19:39:47 crc kubenswrapper[4942]: I0218 19:39:47.165052 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 18 19:39:47 crc kubenswrapper[4942]: I0218 19:39:47.405589 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" event={"ID":"d972a9f6-b2f0-46db-a51b-b47575ff72d6","Type":"ContainerStarted","Data":"90a3f88d42ea2500c4fc51fdcef0e53b011217e4249ea0aff3f311beebd6fa7f"}
Feb 18 19:39:47 crc kubenswrapper[4942]: I0218 19:39:47.436530 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" podStartSLOduration=2.057121928 podStartE2EDuration="12.436503544s" podCreationTimestamp="2026-02-18 19:39:35 +0000 UTC" firstStartedPulling="2026-02-18 19:39:36.369207912 +0000 UTC m=+1336.074140577" lastFinishedPulling="2026-02-18 19:39:46.748589538 +0000 UTC m=+1346.453522193" observedRunningTime="2026-02-18 19:39:47.429899208 +0000 UTC m=+1347.134831873" watchObservedRunningTime="2026-02-18 19:39:47.436503544 +0000 UTC m=+1347.141436219"
Feb 18 19:39:48 crc kubenswrapper[4942]: I0218 19:39:48.441992 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 18 19:39:57 crc kubenswrapper[4942]: I0218 19:39:57.503555 4942 generic.go:334] "Generic (PLEG): container finished" podID="d972a9f6-b2f0-46db-a51b-b47575ff72d6" containerID="90a3f88d42ea2500c4fc51fdcef0e53b011217e4249ea0aff3f311beebd6fa7f" exitCode=0
Feb 18 19:39:57 crc kubenswrapper[4942]: I0218 19:39:57.503604 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" event={"ID":"d972a9f6-b2f0-46db-a51b-b47575ff72d6","Type":"ContainerDied","Data":"90a3f88d42ea2500c4fc51fdcef0e53b011217e4249ea0aff3f311beebd6fa7f"}
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.040045 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.162220 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-repo-setup-combined-ca-bundle\") pod \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") "
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.162330 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-ssh-key-openstack-edpm-ipam\") pod \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") "
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.162460 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-inventory\") pod \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") "
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.162689 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wvhh\" (UniqueName: \"kubernetes.io/projected/d972a9f6-b2f0-46db-a51b-b47575ff72d6-kube-api-access-6wvhh\") pod \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") "
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.170242 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d972a9f6-b2f0-46db-a51b-b47575ff72d6-kube-api-access-6wvhh" (OuterVolumeSpecName: "kube-api-access-6wvhh") pod "d972a9f6-b2f0-46db-a51b-b47575ff72d6" (UID: "d972a9f6-b2f0-46db-a51b-b47575ff72d6"). InnerVolumeSpecName "kube-api-access-6wvhh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.171430 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d972a9f6-b2f0-46db-a51b-b47575ff72d6" (UID: "d972a9f6-b2f0-46db-a51b-b47575ff72d6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.211423 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d972a9f6-b2f0-46db-a51b-b47575ff72d6" (UID: "d972a9f6-b2f0-46db-a51b-b47575ff72d6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.223026 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-inventory" (OuterVolumeSpecName: "inventory") pod "d972a9f6-b2f0-46db-a51b-b47575ff72d6" (UID: "d972a9f6-b2f0-46db-a51b-b47575ff72d6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.265124 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wvhh\" (UniqueName: \"kubernetes.io/projected/d972a9f6-b2f0-46db-a51b-b47575ff72d6-kube-api-access-6wvhh\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.265162 4942 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.265180 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.265193 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.533429 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" event={"ID":"d972a9f6-b2f0-46db-a51b-b47575ff72d6","Type":"ContainerDied","Data":"45552895c35ce022126d6d4ac53308b70b4fce8a439abac0a760fc1143a57ced"}
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.533500 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45552895c35ce022126d6d4ac53308b70b4fce8a439abac0a760fc1143a57ced"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.533511 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.629225 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"]
Feb 18 19:39:59 crc kubenswrapper[4942]: E0218 19:39:59.630268 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d972a9f6-b2f0-46db-a51b-b47575ff72d6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.630302 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d972a9f6-b2f0-46db-a51b-b47575ff72d6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.630828 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="d972a9f6-b2f0-46db-a51b-b47575ff72d6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.632048 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.634147 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.634357 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.634750 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.635493 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.640829 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"]
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.672514 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.672596 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.672635 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxq2g\" (UniqueName: \"kubernetes.io/projected/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-kube-api-access-qxq2g\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.773910 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.773985 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.774024 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxq2g\" (UniqueName: \"kubernetes.io/projected/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-kube-api-access-qxq2g\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.778819 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.778964 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.791012 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxq2g\" (UniqueName: \"kubernetes.io/projected/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-kube-api-access-qxq2g\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"
Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.970997 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"
Feb 18 19:40:00 crc kubenswrapper[4942]: I0218 19:40:00.509383 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"]
Feb 18 19:40:00 crc kubenswrapper[4942]: W0218 19:40:00.511207 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a63da15_6b13_4c7b_bf83_2a4685ada3cf.slice/crio-6fbbe6d8ecae7388df25dad65a19d3ca711387dc0e056ba2385f71b9e90ef15c WatchSource:0}: Error finding container 6fbbe6d8ecae7388df25dad65a19d3ca711387dc0e056ba2385f71b9e90ef15c: Status 404 returned error can't find the container with id 6fbbe6d8ecae7388df25dad65a19d3ca711387dc0e056ba2385f71b9e90ef15c
Feb 18 19:40:00 crc kubenswrapper[4942]: I0218 19:40:00.544200 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" event={"ID":"0a63da15-6b13-4c7b-bf83-2a4685ada3cf","Type":"ContainerStarted","Data":"6fbbe6d8ecae7388df25dad65a19d3ca711387dc0e056ba2385f71b9e90ef15c"}
Feb 18 19:40:01 crc kubenswrapper[4942]: I0218 19:40:01.555510 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" event={"ID":"0a63da15-6b13-4c7b-bf83-2a4685ada3cf","Type":"ContainerStarted","Data":"0dd99c4d846e38d3cc34b10490e5b4c7a913dec7ca68bf37c382b2d307c2c33f"}
Feb 18 19:40:01 crc kubenswrapper[4942]: I0218 19:40:01.583461 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" podStartSLOduration=2.161554366 podStartE2EDuration="2.583424707s" podCreationTimestamp="2026-02-18 19:39:59 +0000 UTC" firstStartedPulling="2026-02-18 19:40:00.513095696 +0000 UTC m=+1360.218028361" lastFinishedPulling="2026-02-18 19:40:00.934966037 +0000 UTC m=+1360.639898702" observedRunningTime="2026-02-18 19:40:01.57680657 +0000 UTC m=+1361.281739235" watchObservedRunningTime="2026-02-18 19:40:01.583424707 +0000 UTC m=+1361.288357412"
Feb 18 19:40:04 crc kubenswrapper[4942]: I0218 19:40:04.589086 4942 generic.go:334] "Generic (PLEG): container finished" podID="0a63da15-6b13-4c7b-bf83-2a4685ada3cf" containerID="0dd99c4d846e38d3cc34b10490e5b4c7a913dec7ca68bf37c382b2d307c2c33f" exitCode=0
Feb 18 19:40:04 crc kubenswrapper[4942]: I0218 19:40:04.589169 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" event={"ID":"0a63da15-6b13-4c7b-bf83-2a4685ada3cf","Type":"ContainerDied","Data":"0dd99c4d846e38d3cc34b10490e5b4c7a913dec7ca68bf37c382b2d307c2c33f"}
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.405565 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.522419 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-inventory\") pod \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") "
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.522615 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-ssh-key-openstack-edpm-ipam\") pod \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") "
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.522745 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxq2g\" (UniqueName: \"kubernetes.io/projected/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-kube-api-access-qxq2g\") pod \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") "
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.531581 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-kube-api-access-qxq2g" (OuterVolumeSpecName: "kube-api-access-qxq2g") pod "0a63da15-6b13-4c7b-bf83-2a4685ada3cf" (UID: "0a63da15-6b13-4c7b-bf83-2a4685ada3cf"). InnerVolumeSpecName "kube-api-access-qxq2g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.557292 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-inventory" (OuterVolumeSpecName: "inventory") pod "0a63da15-6b13-4c7b-bf83-2a4685ada3cf" (UID: "0a63da15-6b13-4c7b-bf83-2a4685ada3cf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.557869 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0a63da15-6b13-4c7b-bf83-2a4685ada3cf" (UID: "0a63da15-6b13-4c7b-bf83-2a4685ada3cf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.608296 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" event={"ID":"0a63da15-6b13-4c7b-bf83-2a4685ada3cf","Type":"ContainerDied","Data":"6fbbe6d8ecae7388df25dad65a19d3ca711387dc0e056ba2385f71b9e90ef15c"}
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.608342 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fbbe6d8ecae7388df25dad65a19d3ca711387dc0e056ba2385f71b9e90ef15c"
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.608352 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.625702 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.625929 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxq2g\" (UniqueName: \"kubernetes.io/projected/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-kube-api-access-qxq2g\") on node \"crc\" DevicePath \"\""
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.626028 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.785620 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"]
Feb 18 19:40:06 crc kubenswrapper[4942]: E0218 19:40:06.786043 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a63da15-6b13-4c7b-bf83-2a4685ada3cf" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.786060 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a63da15-6b13-4c7b-bf83-2a4685ada3cf" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.786234 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a63da15-6b13-4c7b-bf83-2a4685ada3cf" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.786928 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.788960 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.789479 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.789643 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh"
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.790074 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.803926 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"]
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.931110 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.931487 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtdhb\" (UniqueName: \"kubernetes.io/projected/8509499d-4716-44d6-8fb9-539350f38310-kube-api-access-jtdhb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.931663 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"
Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.931810 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"
Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.033386 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"
Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.033860 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtdhb\" (UniqueName: \"kubernetes.io/projected/8509499d-4716-44d6-8fb9-539350f38310-kube-api-access-jtdhb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"
Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.033962 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"
Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.034072 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"
Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.038208 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"
Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.040595 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"
Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.041787 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"
Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.051352 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtdhb\" (UniqueName: \"kubernetes.io/projected/8509499d-4716-44d6-8fb9-539350f38310-kube-api-access-jtdhb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"
Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.105378 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"
Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.645973 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"]
Feb 18 19:40:07 crc kubenswrapper[4942]: W0218 19:40:07.647182 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8509499d_4716_44d6_8fb9_539350f38310.slice/crio-fae6eb265c482ab9f0b9dae86441cc0e6b1492dd39f48b694aca54d22f8761f8 WatchSource:0}: Error finding container fae6eb265c482ab9f0b9dae86441cc0e6b1492dd39f48b694aca54d22f8761f8: Status 404 returned error can't find the container with id fae6eb265c482ab9f0b9dae86441cc0e6b1492dd39f48b694aca54d22f8761f8
Feb 18 19:40:08 crc kubenswrapper[4942]: I0218 19:40:08.649826 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" event={"ID":"8509499d-4716-44d6-8fb9-539350f38310","Type":"ContainerStarted","Data":"2739ca309f28eed92bcb31bf2162f38ddc41005b410bf64c078594e3f919f697"}
Feb 18 19:40:08 crc kubenswrapper[4942]: I0218 19:40:08.650171 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" event={"ID":"8509499d-4716-44d6-8fb9-539350f38310","Type":"ContainerStarted","Data":"fae6eb265c482ab9f0b9dae86441cc0e6b1492dd39f48b694aca54d22f8761f8"}
Feb 18 19:40:08 crc kubenswrapper[4942]: I0218 19:40:08.674573 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" podStartSLOduration=2.268181826 podStartE2EDuration="2.674550992s" podCreationTimestamp="2026-02-18 19:40:06 +0000 UTC" firstStartedPulling="2026-02-18 19:40:07.651696903 +0000 UTC m=+1367.356629568" lastFinishedPulling="2026-02-18 19:40:08.058066069 +0000 UTC m=+1367.762998734" observedRunningTime="2026-02-18 19:40:08.666005953 +0000 UTC m=+1368.370938628" watchObservedRunningTime="2026-02-18 19:40:08.674550992 +0000 UTC m=+1368.379483657"
Feb 18 19:40:46 crc kubenswrapper[4942]: I0218 19:40:46.752154 4942 scope.go:117] "RemoveContainer" containerID="3ca7995811727ed16b81c6dacf4b796cf8cb865100445c8661ce6034aba901d3"
Feb 18 19:40:46 crc kubenswrapper[4942]: I0218 19:40:46.780058 4942 scope.go:117] "RemoveContainer" containerID="91775cfa347502e2c1757de451b7156448b5de2986ec185b6afdfe4b5a592293"
Feb 18 19:40:46 crc kubenswrapper[4942]: I0218 19:40:46.864070 4942 scope.go:117] "RemoveContainer" containerID="7d25a210ee23b71ffe8e6422d5c4b01d726dcdfde682e5219625754a6f1f5d53"
Feb 18 19:40:46 crc kubenswrapper[4942]: I0218 19:40:46.895894 4942 scope.go:117] "RemoveContainer" containerID="f762c8a9d2890b0c6a5aa76b7b4d8dbd055509fafd584287df55f4c0629feaed"
Feb 18 19:40:53 crc kubenswrapper[4942]: I0218 19:40:53.741342 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 19:40:53 crc kubenswrapper[4942]: I0218 19:40:53.741952 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 19:41:23 crc kubenswrapper[4942]: I0218 19:41:23.741219 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 19:41:23 crc kubenswrapper[4942]: I0218 19:41:23.742058 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 19:41:47 crc kubenswrapper[4942]: I0218 19:41:47.035216 4942 scope.go:117] "RemoveContainer" containerID="fa114cb799909584016955a551d4df04e20f11df9588933ed8a958c11cc58031"
Feb 18 19:41:53 crc kubenswrapper[4942]: I0218 19:41:53.741536 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 19:41:53 crc kubenswrapper[4942]: I0218 19:41:53.742424 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 19:41:53 crc kubenswrapper[4942]: I0218 19:41:53.742499 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4"
Feb 18 19:41:53 crc kubenswrapper[4942]: I0218 19:41:53.743688 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f7c7ce7194dc50e8e7ff903a9631c5d1d6654771462dbd4df2dfa299f3641bf"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 19:41:53 crc kubenswrapper[4942]: I0218 19:41:53.743872 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://0f7c7ce7194dc50e8e7ff903a9631c5d1d6654771462dbd4df2dfa299f3641bf" gracePeriod=600
Feb 18 19:41:54 crc kubenswrapper[4942]: I0218 19:41:54.845674 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="0f7c7ce7194dc50e8e7ff903a9631c5d1d6654771462dbd4df2dfa299f3641bf" exitCode=0
Feb 18 19:41:54 crc kubenswrapper[4942]: I0218 19:41:54.845814 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"0f7c7ce7194dc50e8e7ff903a9631c5d1d6654771462dbd4df2dfa299f3641bf"}
Feb 18 19:41:54 crc kubenswrapper[4942]: I0218 19:41:54.846121 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"}
Feb 18 19:41:54 crc kubenswrapper[4942]: I0218 19:41:54.846144 4942 scope.go:117] "RemoveContainer" containerID="8ecda90ff377eb2cb3234b37ad9a8ec87fa575a7e7c5a3a78ee7c2e00f4a7b66"
Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.195004 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j4t4b"]
Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.200480 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4t4b"
Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.211616 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4t4b"]
Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.330392 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-catalog-content\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b"
Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.330607 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-utilities\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b"
Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.330659 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnn62\" (UniqueName: \"kubernetes.io/projected/f7384611-597a-411c-b993-aa5e957d2a22-kube-api-access-vnn62\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b"
Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.432564 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-catalog-content\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b"
Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.432679 4942 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-utilities\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.432698 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnn62\" (UniqueName: \"kubernetes.io/projected/f7384611-597a-411c-b993-aa5e957d2a22-kube-api-access-vnn62\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.433109 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-catalog-content\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.433126 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-utilities\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.454445 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnn62\" (UniqueName: \"kubernetes.io/projected/f7384611-597a-411c-b993-aa5e957d2a22-kube-api-access-vnn62\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.557578 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:41:58 crc kubenswrapper[4942]: I0218 19:41:58.097496 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4t4b"] Feb 18 19:41:58 crc kubenswrapper[4942]: I0218 19:41:58.894080 4942 generic.go:334] "Generic (PLEG): container finished" podID="f7384611-597a-411c-b993-aa5e957d2a22" containerID="55ced6eed1f9da2ba0f67e272103f463ac07d013a256f2c716d9a43678f69d7c" exitCode=0 Feb 18 19:41:58 crc kubenswrapper[4942]: I0218 19:41:58.894143 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4t4b" event={"ID":"f7384611-597a-411c-b993-aa5e957d2a22","Type":"ContainerDied","Data":"55ced6eed1f9da2ba0f67e272103f463ac07d013a256f2c716d9a43678f69d7c"} Feb 18 19:41:58 crc kubenswrapper[4942]: I0218 19:41:58.894414 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4t4b" event={"ID":"f7384611-597a-411c-b993-aa5e957d2a22","Type":"ContainerStarted","Data":"6b2ebaf7f10e3d90236eb75e4cc0f96710547637aa67ed7d29f74873591e1d4b"} Feb 18 19:42:00 crc kubenswrapper[4942]: I0218 19:42:00.920453 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4t4b" event={"ID":"f7384611-597a-411c-b993-aa5e957d2a22","Type":"ContainerStarted","Data":"b584ebb67a6c0c28731b826136ac25eca0c9764c757bbe199ab65855e3c1858e"} Feb 18 19:42:01 crc kubenswrapper[4942]: I0218 19:42:01.931196 4942 generic.go:334] "Generic (PLEG): container finished" podID="f7384611-597a-411c-b993-aa5e957d2a22" containerID="b584ebb67a6c0c28731b826136ac25eca0c9764c757bbe199ab65855e3c1858e" exitCode=0 Feb 18 19:42:01 crc kubenswrapper[4942]: I0218 19:42:01.931262 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4t4b" 
event={"ID":"f7384611-597a-411c-b993-aa5e957d2a22","Type":"ContainerDied","Data":"b584ebb67a6c0c28731b826136ac25eca0c9764c757bbe199ab65855e3c1858e"} Feb 18 19:42:01 crc kubenswrapper[4942]: I0218 19:42:01.934101 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:42:02 crc kubenswrapper[4942]: I0218 19:42:02.945705 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4t4b" event={"ID":"f7384611-597a-411c-b993-aa5e957d2a22","Type":"ContainerStarted","Data":"8327d9bc2bdc26af66c6fafe10dbbd520b3f9d0a5ae66043707b2cbdb4de9720"} Feb 18 19:42:02 crc kubenswrapper[4942]: I0218 19:42:02.971820 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j4t4b" podStartSLOduration=2.474420436 podStartE2EDuration="5.971798576s" podCreationTimestamp="2026-02-18 19:41:57 +0000 UTC" firstStartedPulling="2026-02-18 19:41:58.897134799 +0000 UTC m=+1478.602067464" lastFinishedPulling="2026-02-18 19:42:02.394512899 +0000 UTC m=+1482.099445604" observedRunningTime="2026-02-18 19:42:02.964269427 +0000 UTC m=+1482.669202122" watchObservedRunningTime="2026-02-18 19:42:02.971798576 +0000 UTC m=+1482.676731241" Feb 18 19:42:03 crc kubenswrapper[4942]: I0218 19:42:03.982156 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p8sbc"] Feb 18 19:42:03 crc kubenswrapper[4942]: I0218 19:42:03.989631 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.000519 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p8sbc"] Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.197233 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m746x\" (UniqueName: \"kubernetes.io/projected/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-kube-api-access-m746x\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.197490 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-catalog-content\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.197528 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-utilities\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.299565 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-catalog-content\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.299615 4942 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-utilities\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.299678 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m746x\" (UniqueName: \"kubernetes.io/projected/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-kube-api-access-m746x\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.300096 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-catalog-content\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.300161 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-utilities\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.320579 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m746x\" (UniqueName: \"kubernetes.io/projected/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-kube-api-access-m746x\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.612123 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:05 crc kubenswrapper[4942]: I0218 19:42:05.103924 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p8sbc"] Feb 18 19:42:05 crc kubenswrapper[4942]: W0218 19:42:05.112906 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd83d3c09_6dc0_4ed4_adc3_183e76a548fa.slice/crio-8daa7defde0b7bd35f941fc3b51d0725ecc9471880f30c486e1dc1a01ca1b651 WatchSource:0}: Error finding container 8daa7defde0b7bd35f941fc3b51d0725ecc9471880f30c486e1dc1a01ca1b651: Status 404 returned error can't find the container with id 8daa7defde0b7bd35f941fc3b51d0725ecc9471880f30c486e1dc1a01ca1b651 Feb 18 19:42:05 crc kubenswrapper[4942]: I0218 19:42:05.975895 4942 generic.go:334] "Generic (PLEG): container finished" podID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerID="068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98" exitCode=0 Feb 18 19:42:05 crc kubenswrapper[4942]: I0218 19:42:05.976072 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8sbc" event={"ID":"d83d3c09-6dc0-4ed4-adc3-183e76a548fa","Type":"ContainerDied","Data":"068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98"} Feb 18 19:42:05 crc kubenswrapper[4942]: I0218 19:42:05.976192 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8sbc" event={"ID":"d83d3c09-6dc0-4ed4-adc3-183e76a548fa","Type":"ContainerStarted","Data":"8daa7defde0b7bd35f941fc3b51d0725ecc9471880f30c486e1dc1a01ca1b651"} Feb 18 19:42:07 crc kubenswrapper[4942]: I0218 19:42:07.557691 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:42:07 crc kubenswrapper[4942]: I0218 19:42:07.557991 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:42:07 crc kubenswrapper[4942]: I0218 19:42:07.634395 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:42:07 crc kubenswrapper[4942]: I0218 19:42:07.999832 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8sbc" event={"ID":"d83d3c09-6dc0-4ed4-adc3-183e76a548fa","Type":"ContainerStarted","Data":"b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f"} Feb 18 19:42:08 crc kubenswrapper[4942]: I0218 19:42:08.073440 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:42:09 crc kubenswrapper[4942]: I0218 19:42:09.370185 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4t4b"] Feb 18 19:42:10 crc kubenswrapper[4942]: I0218 19:42:10.025571 4942 generic.go:334] "Generic (PLEG): container finished" podID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerID="b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f" exitCode=0 Feb 18 19:42:10 crc kubenswrapper[4942]: I0218 19:42:10.025661 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8sbc" event={"ID":"d83d3c09-6dc0-4ed4-adc3-183e76a548fa","Type":"ContainerDied","Data":"b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f"} Feb 18 19:42:10 crc kubenswrapper[4942]: I0218 19:42:10.025916 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j4t4b" podUID="f7384611-597a-411c-b993-aa5e957d2a22" containerName="registry-server" containerID="cri-o://8327d9bc2bdc26af66c6fafe10dbbd520b3f9d0a5ae66043707b2cbdb4de9720" gracePeriod=2 Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.053704 4942 generic.go:334] "Generic (PLEG): container 
finished" podID="f7384611-597a-411c-b993-aa5e957d2a22" containerID="8327d9bc2bdc26af66c6fafe10dbbd520b3f9d0a5ae66043707b2cbdb4de9720" exitCode=0 Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.057709 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4t4b" event={"ID":"f7384611-597a-411c-b993-aa5e957d2a22","Type":"ContainerDied","Data":"8327d9bc2bdc26af66c6fafe10dbbd520b3f9d0a5ae66043707b2cbdb4de9720"} Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.397787 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.474723 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnn62\" (UniqueName: \"kubernetes.io/projected/f7384611-597a-411c-b993-aa5e957d2a22-kube-api-access-vnn62\") pod \"f7384611-597a-411c-b993-aa5e957d2a22\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.474857 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-utilities\") pod \"f7384611-597a-411c-b993-aa5e957d2a22\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.474943 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-catalog-content\") pod \"f7384611-597a-411c-b993-aa5e957d2a22\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.476567 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-utilities" (OuterVolumeSpecName: "utilities") pod 
"f7384611-597a-411c-b993-aa5e957d2a22" (UID: "f7384611-597a-411c-b993-aa5e957d2a22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.480560 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7384611-597a-411c-b993-aa5e957d2a22-kube-api-access-vnn62" (OuterVolumeSpecName: "kube-api-access-vnn62") pod "f7384611-597a-411c-b993-aa5e957d2a22" (UID: "f7384611-597a-411c-b993-aa5e957d2a22"). InnerVolumeSpecName "kube-api-access-vnn62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.535936 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7384611-597a-411c-b993-aa5e957d2a22" (UID: "f7384611-597a-411c-b993-aa5e957d2a22"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.576460 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnn62\" (UniqueName: \"kubernetes.io/projected/f7384611-597a-411c-b993-aa5e957d2a22-kube-api-access-vnn62\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.576493 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.576503 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.065970 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8sbc" event={"ID":"d83d3c09-6dc0-4ed4-adc3-183e76a548fa","Type":"ContainerStarted","Data":"48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4"} Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.068308 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4t4b" event={"ID":"f7384611-597a-411c-b993-aa5e957d2a22","Type":"ContainerDied","Data":"6b2ebaf7f10e3d90236eb75e4cc0f96710547637aa67ed7d29f74873591e1d4b"} Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.068345 4942 scope.go:117] "RemoveContainer" containerID="8327d9bc2bdc26af66c6fafe10dbbd520b3f9d0a5ae66043707b2cbdb4de9720" Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.068459 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.096906 4942 scope.go:117] "RemoveContainer" containerID="b584ebb67a6c0c28731b826136ac25eca0c9764c757bbe199ab65855e3c1858e" Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.107585 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p8sbc" podStartSLOduration=4.155179566 podStartE2EDuration="9.107564274s" podCreationTimestamp="2026-02-18 19:42:03 +0000 UTC" firstStartedPulling="2026-02-18 19:42:05.978060008 +0000 UTC m=+1485.682992713" lastFinishedPulling="2026-02-18 19:42:10.930444756 +0000 UTC m=+1490.635377421" observedRunningTime="2026-02-18 19:42:12.097218071 +0000 UTC m=+1491.802150786" watchObservedRunningTime="2026-02-18 19:42:12.107564274 +0000 UTC m=+1491.812496939" Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.132701 4942 scope.go:117] "RemoveContainer" containerID="55ced6eed1f9da2ba0f67e272103f463ac07d013a256f2c716d9a43678f69d7c" Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.132941 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4t4b"] Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.146278 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j4t4b"] Feb 18 19:42:13 crc kubenswrapper[4942]: I0218 19:42:13.048318 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7384611-597a-411c-b993-aa5e957d2a22" path="/var/lib/kubelet/pods/f7384611-597a-411c-b993-aa5e957d2a22/volumes" Feb 18 19:42:14 crc kubenswrapper[4942]: I0218 19:42:14.612714 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:14 crc kubenswrapper[4942]: I0218 19:42:14.612772 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:15 crc kubenswrapper[4942]: I0218 19:42:15.662633 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p8sbc" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="registry-server" probeResult="failure" output=< Feb 18 19:42:15 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 19:42:15 crc kubenswrapper[4942]: > Feb 18 19:42:24 crc kubenswrapper[4942]: I0218 19:42:24.696208 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:24 crc kubenswrapper[4942]: I0218 19:42:24.763988 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:24 crc kubenswrapper[4942]: I0218 19:42:24.939149 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p8sbc"] Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.200626 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p8sbc" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="registry-server" containerID="cri-o://48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4" gracePeriod=2 Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.685292 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.800167 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-catalog-content\") pod \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.800381 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-utilities\") pod \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.800535 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m746x\" (UniqueName: \"kubernetes.io/projected/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-kube-api-access-m746x\") pod \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.801147 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-utilities" (OuterVolumeSpecName: "utilities") pod "d83d3c09-6dc0-4ed4-adc3-183e76a548fa" (UID: "d83d3c09-6dc0-4ed4-adc3-183e76a548fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.809067 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-kube-api-access-m746x" (OuterVolumeSpecName: "kube-api-access-m746x") pod "d83d3c09-6dc0-4ed4-adc3-183e76a548fa" (UID: "d83d3c09-6dc0-4ed4-adc3-183e76a548fa"). InnerVolumeSpecName "kube-api-access-m746x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.902506 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.902541 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m746x\" (UniqueName: \"kubernetes.io/projected/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-kube-api-access-m746x\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.927292 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d83d3c09-6dc0-4ed4-adc3-183e76a548fa" (UID: "d83d3c09-6dc0-4ed4-adc3-183e76a548fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.005055 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.216904 4942 generic.go:334] "Generic (PLEG): container finished" podID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerID="48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4" exitCode=0 Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.216980 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8sbc" event={"ID":"d83d3c09-6dc0-4ed4-adc3-183e76a548fa","Type":"ContainerDied","Data":"48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4"} Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.217077 4942 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-p8sbc" event={"ID":"d83d3c09-6dc0-4ed4-adc3-183e76a548fa","Type":"ContainerDied","Data":"8daa7defde0b7bd35f941fc3b51d0725ecc9471880f30c486e1dc1a01ca1b651"} Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.217110 4942 scope.go:117] "RemoveContainer" containerID="48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4" Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.217010 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.253355 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p8sbc"] Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.256702 4942 scope.go:117] "RemoveContainer" containerID="b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f" Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.266105 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p8sbc"] Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.295139 4942 scope.go:117] "RemoveContainer" containerID="068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98" Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.339173 4942 scope.go:117] "RemoveContainer" containerID="48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4" Feb 18 19:42:27 crc kubenswrapper[4942]: E0218 19:42:27.339637 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4\": container with ID starting with 48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4 not found: ID does not exist" containerID="48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4" Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.339669 4942 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4"} err="failed to get container status \"48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4\": rpc error: code = NotFound desc = could not find container \"48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4\": container with ID starting with 48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4 not found: ID does not exist" Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.339690 4942 scope.go:117] "RemoveContainer" containerID="b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f" Feb 18 19:42:27 crc kubenswrapper[4942]: E0218 19:42:27.340039 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f\": container with ID starting with b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f not found: ID does not exist" containerID="b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f" Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.340066 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f"} err="failed to get container status \"b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f\": rpc error: code = NotFound desc = could not find container \"b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f\": container with ID starting with b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f not found: ID does not exist" Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.340084 4942 scope.go:117] "RemoveContainer" containerID="068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98" Feb 18 19:42:27 crc kubenswrapper[4942]: E0218 
19:42:27.340538 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98\": container with ID starting with 068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98 not found: ID does not exist" containerID="068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98" Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.340607 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98"} err="failed to get container status \"068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98\": rpc error: code = NotFound desc = could not find container \"068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98\": container with ID starting with 068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98 not found: ID does not exist" Feb 18 19:42:29 crc kubenswrapper[4942]: I0218 19:42:29.057568 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" path="/var/lib/kubelet/pods/d83d3c09-6dc0-4ed4-adc3-183e76a548fa/volumes" Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.865783 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rcsvv"] Feb 18 19:42:44 crc kubenswrapper[4942]: E0218 19:42:44.867171 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="extract-content" Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.867193 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="extract-content" Feb 18 19:42:44 crc kubenswrapper[4942]: E0218 19:42:44.867214 4942 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f7384611-597a-411c-b993-aa5e957d2a22" containerName="extract-utilities" Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.867225 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7384611-597a-411c-b993-aa5e957d2a22" containerName="extract-utilities" Feb 18 19:42:44 crc kubenswrapper[4942]: E0218 19:42:44.867250 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="registry-server" Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.867261 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="registry-server" Feb 18 19:42:44 crc kubenswrapper[4942]: E0218 19:42:44.867276 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="extract-utilities" Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.867287 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="extract-utilities" Feb 18 19:42:44 crc kubenswrapper[4942]: E0218 19:42:44.867314 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7384611-597a-411c-b993-aa5e957d2a22" containerName="registry-server" Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.867324 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7384611-597a-411c-b993-aa5e957d2a22" containerName="registry-server" Feb 18 19:42:44 crc kubenswrapper[4942]: E0218 19:42:44.867348 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7384611-597a-411c-b993-aa5e957d2a22" containerName="extract-content" Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.867358 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7384611-597a-411c-b993-aa5e957d2a22" containerName="extract-content" Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.867619 4942 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="registry-server" Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.867668 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7384611-597a-411c-b993-aa5e957d2a22" containerName="registry-server" Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.869842 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rcsvv" Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.877649 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rcsvv"] Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.991720 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv55p\" (UniqueName: \"kubernetes.io/projected/e8d8c3f0-9548-40e6-8504-24d707672276-kube-api-access-rv55p\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv" Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.991997 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-utilities\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv" Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.992138 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-catalog-content\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv" Feb 18 19:42:45 crc kubenswrapper[4942]: I0218 19:42:45.093865 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rv55p\" (UniqueName: \"kubernetes.io/projected/e8d8c3f0-9548-40e6-8504-24d707672276-kube-api-access-rv55p\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv" Feb 18 19:42:45 crc kubenswrapper[4942]: I0218 19:42:45.094004 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-utilities\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv" Feb 18 19:42:45 crc kubenswrapper[4942]: I0218 19:42:45.094034 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-catalog-content\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv" Feb 18 19:42:45 crc kubenswrapper[4942]: I0218 19:42:45.094483 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-utilities\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv" Feb 18 19:42:45 crc kubenswrapper[4942]: I0218 19:42:45.094505 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-catalog-content\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv" Feb 18 19:42:45 crc kubenswrapper[4942]: I0218 19:42:45.115852 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rv55p\" (UniqueName: \"kubernetes.io/projected/e8d8c3f0-9548-40e6-8504-24d707672276-kube-api-access-rv55p\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv" Feb 18 19:42:45 crc kubenswrapper[4942]: I0218 19:42:45.220539 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rcsvv" Feb 18 19:42:45 crc kubenswrapper[4942]: I0218 19:42:45.732247 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rcsvv"] Feb 18 19:42:46 crc kubenswrapper[4942]: I0218 19:42:46.392564 4942 generic.go:334] "Generic (PLEG): container finished" podID="e8d8c3f0-9548-40e6-8504-24d707672276" containerID="ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2" exitCode=0 Feb 18 19:42:46 crc kubenswrapper[4942]: I0218 19:42:46.392631 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcsvv" event={"ID":"e8d8c3f0-9548-40e6-8504-24d707672276","Type":"ContainerDied","Data":"ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2"} Feb 18 19:42:46 crc kubenswrapper[4942]: I0218 19:42:46.393181 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcsvv" event={"ID":"e8d8c3f0-9548-40e6-8504-24d707672276","Type":"ContainerStarted","Data":"0ba7b0673b4b918e03a702622af24ec80f1948edb0f2b842c82a117574bfbca8"} Feb 18 19:42:48 crc kubenswrapper[4942]: I0218 19:42:48.416650 4942 generic.go:334] "Generic (PLEG): container finished" podID="e8d8c3f0-9548-40e6-8504-24d707672276" containerID="fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2" exitCode=0 Feb 18 19:42:48 crc kubenswrapper[4942]: I0218 19:42:48.416853 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcsvv" 
event={"ID":"e8d8c3f0-9548-40e6-8504-24d707672276","Type":"ContainerDied","Data":"fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2"} Feb 18 19:42:49 crc kubenswrapper[4942]: I0218 19:42:49.428208 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcsvv" event={"ID":"e8d8c3f0-9548-40e6-8504-24d707672276","Type":"ContainerStarted","Data":"243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659"} Feb 18 19:42:49 crc kubenswrapper[4942]: I0218 19:42:49.464522 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rcsvv" podStartSLOduration=2.966293377 podStartE2EDuration="5.464503056s" podCreationTimestamp="2026-02-18 19:42:44 +0000 UTC" firstStartedPulling="2026-02-18 19:42:46.394987818 +0000 UTC m=+1526.099920483" lastFinishedPulling="2026-02-18 19:42:48.893197497 +0000 UTC m=+1528.598130162" observedRunningTime="2026-02-18 19:42:49.455058548 +0000 UTC m=+1529.159991233" watchObservedRunningTime="2026-02-18 19:42:49.464503056 +0000 UTC m=+1529.169435721" Feb 18 19:42:55 crc kubenswrapper[4942]: I0218 19:42:55.221017 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rcsvv" Feb 18 19:42:55 crc kubenswrapper[4942]: I0218 19:42:55.221396 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rcsvv" Feb 18 19:42:55 crc kubenswrapper[4942]: I0218 19:42:55.284895 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rcsvv" Feb 18 19:42:55 crc kubenswrapper[4942]: I0218 19:42:55.585253 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rcsvv" Feb 18 19:42:55 crc kubenswrapper[4942]: I0218 19:42:55.653311 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-rcsvv"] Feb 18 19:42:57 crc kubenswrapper[4942]: I0218 19:42:57.511375 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rcsvv" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" containerName="registry-server" containerID="cri-o://243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659" gracePeriod=2 Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.065672 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rcsvv" Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.165744 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv55p\" (UniqueName: \"kubernetes.io/projected/e8d8c3f0-9548-40e6-8504-24d707672276-kube-api-access-rv55p\") pod \"e8d8c3f0-9548-40e6-8504-24d707672276\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.165898 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-utilities\") pod \"e8d8c3f0-9548-40e6-8504-24d707672276\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.165964 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-catalog-content\") pod \"e8d8c3f0-9548-40e6-8504-24d707672276\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.167137 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-utilities" (OuterVolumeSpecName: "utilities") pod "e8d8c3f0-9548-40e6-8504-24d707672276" (UID: 
"e8d8c3f0-9548-40e6-8504-24d707672276"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.174087 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d8c3f0-9548-40e6-8504-24d707672276-kube-api-access-rv55p" (OuterVolumeSpecName: "kube-api-access-rv55p") pod "e8d8c3f0-9548-40e6-8504-24d707672276" (UID: "e8d8c3f0-9548-40e6-8504-24d707672276"). InnerVolumeSpecName "kube-api-access-rv55p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.227803 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8d8c3f0-9548-40e6-8504-24d707672276" (UID: "e8d8c3f0-9548-40e6-8504-24d707672276"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.268698 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv55p\" (UniqueName: \"kubernetes.io/projected/e8d8c3f0-9548-40e6-8504-24d707672276-kube-api-access-rv55p\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.268740 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.268752 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.527890 4942 generic.go:334] "Generic (PLEG): container finished" 
podID="e8d8c3f0-9548-40e6-8504-24d707672276" containerID="243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659" exitCode=0 Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.527988 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcsvv" event={"ID":"e8d8c3f0-9548-40e6-8504-24d707672276","Type":"ContainerDied","Data":"243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659"} Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.527979 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rcsvv" Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.528053 4942 scope.go:117] "RemoveContainer" containerID="243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659" Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.528035 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcsvv" event={"ID":"e8d8c3f0-9548-40e6-8504-24d707672276","Type":"ContainerDied","Data":"0ba7b0673b4b918e03a702622af24ec80f1948edb0f2b842c82a117574bfbca8"} Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.557742 4942 scope.go:117] "RemoveContainer" containerID="fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2" Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.585864 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rcsvv"] Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.595038 4942 scope.go:117] "RemoveContainer" containerID="ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2" Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.600813 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rcsvv"] Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.642079 4942 scope.go:117] "RemoveContainer" 
containerID="243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659" Feb 18 19:42:58 crc kubenswrapper[4942]: E0218 19:42:58.642516 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659\": container with ID starting with 243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659 not found: ID does not exist" containerID="243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659" Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.642557 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659"} err="failed to get container status \"243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659\": rpc error: code = NotFound desc = could not find container \"243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659\": container with ID starting with 243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659 not found: ID does not exist" Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.642583 4942 scope.go:117] "RemoveContainer" containerID="fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2" Feb 18 19:42:58 crc kubenswrapper[4942]: E0218 19:42:58.642951 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2\": container with ID starting with fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2 not found: ID does not exist" containerID="fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2" Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.642979 4942 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2"} err="failed to get container status \"fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2\": rpc error: code = NotFound desc = could not find container \"fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2\": container with ID starting with fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2 not found: ID does not exist" Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.642999 4942 scope.go:117] "RemoveContainer" containerID="ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2" Feb 18 19:42:58 crc kubenswrapper[4942]: E0218 19:42:58.643279 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2\": container with ID starting with ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2 not found: ID does not exist" containerID="ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2" Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.643329 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2"} err="failed to get container status \"ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2\": rpc error: code = NotFound desc = could not find container \"ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2\": container with ID starting with ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2 not found: ID does not exist" Feb 18 19:42:59 crc kubenswrapper[4942]: I0218 19:42:59.046460 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" path="/var/lib/kubelet/pods/e8d8c3f0-9548-40e6-8504-24d707672276/volumes" Feb 18 19:43:17 crc kubenswrapper[4942]: I0218 
19:43:17.721473 4942 generic.go:334] "Generic (PLEG): container finished" podID="8509499d-4716-44d6-8fb9-539350f38310" containerID="2739ca309f28eed92bcb31bf2162f38ddc41005b410bf64c078594e3f919f697" exitCode=0 Feb 18 19:43:17 crc kubenswrapper[4942]: I0218 19:43:17.721628 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" event={"ID":"8509499d-4716-44d6-8fb9-539350f38310","Type":"ContainerDied","Data":"2739ca309f28eed92bcb31bf2162f38ddc41005b410bf64c078594e3f919f697"} Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.158981 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.290889 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-bootstrap-combined-ca-bundle\") pod \"8509499d-4716-44d6-8fb9-539350f38310\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.291251 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-ssh-key-openstack-edpm-ipam\") pod \"8509499d-4716-44d6-8fb9-539350f38310\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.291289 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtdhb\" (UniqueName: \"kubernetes.io/projected/8509499d-4716-44d6-8fb9-539350f38310-kube-api-access-jtdhb\") pod \"8509499d-4716-44d6-8fb9-539350f38310\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.291452 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-inventory\") pod \"8509499d-4716-44d6-8fb9-539350f38310\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.296836 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "8509499d-4716-44d6-8fb9-539350f38310" (UID: "8509499d-4716-44d6-8fb9-539350f38310"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.298030 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8509499d-4716-44d6-8fb9-539350f38310-kube-api-access-jtdhb" (OuterVolumeSpecName: "kube-api-access-jtdhb") pod "8509499d-4716-44d6-8fb9-539350f38310" (UID: "8509499d-4716-44d6-8fb9-539350f38310"). InnerVolumeSpecName "kube-api-access-jtdhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.334753 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8509499d-4716-44d6-8fb9-539350f38310" (UID: "8509499d-4716-44d6-8fb9-539350f38310"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.341073 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-inventory" (OuterVolumeSpecName: "inventory") pod "8509499d-4716-44d6-8fb9-539350f38310" (UID: "8509499d-4716-44d6-8fb9-539350f38310"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.394587 4942 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.394616 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.394626 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtdhb\" (UniqueName: \"kubernetes.io/projected/8509499d-4716-44d6-8fb9-539350f38310-kube-api-access-jtdhb\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.394635 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.741629 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" event={"ID":"8509499d-4716-44d6-8fb9-539350f38310","Type":"ContainerDied","Data":"fae6eb265c482ab9f0b9dae86441cc0e6b1492dd39f48b694aca54d22f8761f8"} Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.741675 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fae6eb265c482ab9f0b9dae86441cc0e6b1492dd39f48b694aca54d22f8761f8" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.741670 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.830797 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq"] Feb 18 19:43:19 crc kubenswrapper[4942]: E0218 19:43:19.831238 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" containerName="extract-utilities" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.831261 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" containerName="extract-utilities" Feb 18 19:43:19 crc kubenswrapper[4942]: E0218 19:43:19.831270 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" containerName="registry-server" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.831276 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" containerName="registry-server" Feb 18 19:43:19 crc kubenswrapper[4942]: E0218 19:43:19.831284 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" containerName="extract-content" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.831291 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" containerName="extract-content" Feb 18 19:43:19 crc kubenswrapper[4942]: E0218 19:43:19.831330 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8509499d-4716-44d6-8fb9-539350f38310" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.831338 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="8509499d-4716-44d6-8fb9-539350f38310" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.831548 
4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="8509499d-4716-44d6-8fb9-539350f38310" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.831569 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" containerName="registry-server" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.832221 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.835249 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.835482 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.835746 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.835914 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.844587 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq"] Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.903885 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 
19:43:19.903969 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.904000 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9rz6\" (UniqueName: \"kubernetes.io/projected/b3125f54-a594-4c20-ab3f-298cd68f3709-kube-api-access-v9rz6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.006172 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.006273 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.006304 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9rz6\" 
(UniqueName: \"kubernetes.io/projected/b3125f54-a594-4c20-ab3f-298cd68f3709-kube-api-access-v9rz6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.011202 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.011739 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.026189 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9rz6\" (UniqueName: \"kubernetes.io/projected/b3125f54-a594-4c20-ab3f-298cd68f3709-kube-api-access-v9rz6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.195978 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.729206 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq"] Feb 18 19:43:20 crc kubenswrapper[4942]: W0218 19:43:20.737860 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3125f54_a594_4c20_ab3f_298cd68f3709.slice/crio-676f2e621a4acf72ea1e7f3770f3144f3d9cebbcc9f5130f6749209326139fa6 WatchSource:0}: Error finding container 676f2e621a4acf72ea1e7f3770f3144f3d9cebbcc9f5130f6749209326139fa6: Status 404 returned error can't find the container with id 676f2e621a4acf72ea1e7f3770f3144f3d9cebbcc9f5130f6749209326139fa6 Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.753646 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" event={"ID":"b3125f54-a594-4c20-ab3f-298cd68f3709","Type":"ContainerStarted","Data":"676f2e621a4acf72ea1e7f3770f3144f3d9cebbcc9f5130f6749209326139fa6"} Feb 18 19:43:21 crc kubenswrapper[4942]: I0218 19:43:21.248342 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:43:21 crc kubenswrapper[4942]: I0218 19:43:21.763957 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" event={"ID":"b3125f54-a594-4c20-ab3f-298cd68f3709","Type":"ContainerStarted","Data":"ac85b185c9e8dc808f91626080d82ac680145476d477bab1b8677a51e222d00a"} Feb 18 19:43:21 crc kubenswrapper[4942]: I0218 19:43:21.786477 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" podStartSLOduration=2.280901369 podStartE2EDuration="2.786458625s" podCreationTimestamp="2026-02-18 
19:43:19 +0000 UTC" firstStartedPulling="2026-02-18 19:43:20.739716091 +0000 UTC m=+1560.444648756" lastFinishedPulling="2026-02-18 19:43:21.245273347 +0000 UTC m=+1560.950206012" observedRunningTime="2026-02-18 19:43:21.781995237 +0000 UTC m=+1561.486927912" watchObservedRunningTime="2026-02-18 19:43:21.786458625 +0000 UTC m=+1561.491391290" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.366091 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rnvb2"] Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.373257 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.398421 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnvb2"] Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.494149 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bnt5\" (UniqueName: \"kubernetes.io/projected/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-kube-api-access-7bnt5\") pod \"redhat-marketplace-rnvb2\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.494973 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-utilities\") pod \"redhat-marketplace-rnvb2\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.495050 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-catalog-content\") pod 
\"redhat-marketplace-rnvb2\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.598034 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-catalog-content\") pod \"redhat-marketplace-rnvb2\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.598148 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bnt5\" (UniqueName: \"kubernetes.io/projected/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-kube-api-access-7bnt5\") pod \"redhat-marketplace-rnvb2\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.598263 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-utilities\") pod \"redhat-marketplace-rnvb2\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.598536 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-catalog-content\") pod \"redhat-marketplace-rnvb2\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.598675 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-utilities\") pod \"redhat-marketplace-rnvb2\" (UID: 
\"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.622466 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bnt5\" (UniqueName: \"kubernetes.io/projected/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-kube-api-access-7bnt5\") pod \"redhat-marketplace-rnvb2\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.702294 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:25 crc kubenswrapper[4942]: I0218 19:43:25.210183 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnvb2"] Feb 18 19:43:25 crc kubenswrapper[4942]: I0218 19:43:25.800674 4942 generic.go:334] "Generic (PLEG): container finished" podID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerID="a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67" exitCode=0 Feb 18 19:43:25 crc kubenswrapper[4942]: I0218 19:43:25.800748 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnvb2" event={"ID":"20cf93d8-a0c4-4855-9a18-b8e1ea19e417","Type":"ContainerDied","Data":"a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67"} Feb 18 19:43:25 crc kubenswrapper[4942]: I0218 19:43:25.801167 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnvb2" event={"ID":"20cf93d8-a0c4-4855-9a18-b8e1ea19e417","Type":"ContainerStarted","Data":"b91dfcc0a55e7f78bdfd59306beaed39399ca278506b90a4cd7e3ad2eb529cb3"} Feb 18 19:43:26 crc kubenswrapper[4942]: I0218 19:43:26.811489 4942 generic.go:334] "Generic (PLEG): container finished" podID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" 
containerID="32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0" exitCode=0 Feb 18 19:43:26 crc kubenswrapper[4942]: I0218 19:43:26.811604 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnvb2" event={"ID":"20cf93d8-a0c4-4855-9a18-b8e1ea19e417","Type":"ContainerDied","Data":"32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0"} Feb 18 19:43:27 crc kubenswrapper[4942]: I0218 19:43:27.825123 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnvb2" event={"ID":"20cf93d8-a0c4-4855-9a18-b8e1ea19e417","Type":"ContainerStarted","Data":"406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835"} Feb 18 19:43:27 crc kubenswrapper[4942]: I0218 19:43:27.853124 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rnvb2" podStartSLOduration=2.412958847 podStartE2EDuration="3.853102156s" podCreationTimestamp="2026-02-18 19:43:24 +0000 UTC" firstStartedPulling="2026-02-18 19:43:25.80233476 +0000 UTC m=+1565.507267435" lastFinishedPulling="2026-02-18 19:43:27.242478069 +0000 UTC m=+1566.947410744" observedRunningTime="2026-02-18 19:43:27.84193699 +0000 UTC m=+1567.546869735" watchObservedRunningTime="2026-02-18 19:43:27.853102156 +0000 UTC m=+1567.558034821" Feb 18 19:43:34 crc kubenswrapper[4942]: I0218 19:43:34.703464 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:34 crc kubenswrapper[4942]: I0218 19:43:34.704066 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:34 crc kubenswrapper[4942]: I0218 19:43:34.748878 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:34 crc kubenswrapper[4942]: I0218 19:43:34.996889 4942 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:35 crc kubenswrapper[4942]: I0218 19:43:35.047063 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnvb2"] Feb 18 19:43:36 crc kubenswrapper[4942]: I0218 19:43:36.940106 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rnvb2" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerName="registry-server" containerID="cri-o://406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835" gracePeriod=2 Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.467219 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.496334 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bnt5\" (UniqueName: \"kubernetes.io/projected/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-kube-api-access-7bnt5\") pod \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.496559 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-catalog-content\") pod \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.496581 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-utilities\") pod \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.497625 4942 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-utilities" (OuterVolumeSpecName: "utilities") pod "20cf93d8-a0c4-4855-9a18-b8e1ea19e417" (UID: "20cf93d8-a0c4-4855-9a18-b8e1ea19e417"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.505325 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-kube-api-access-7bnt5" (OuterVolumeSpecName: "kube-api-access-7bnt5") pod "20cf93d8-a0c4-4855-9a18-b8e1ea19e417" (UID: "20cf93d8-a0c4-4855-9a18-b8e1ea19e417"). InnerVolumeSpecName "kube-api-access-7bnt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.541615 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20cf93d8-a0c4-4855-9a18-b8e1ea19e417" (UID: "20cf93d8-a0c4-4855-9a18-b8e1ea19e417"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.598241 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.598274 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.598288 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bnt5\" (UniqueName: \"kubernetes.io/projected/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-kube-api-access-7bnt5\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.953900 4942 generic.go:334] "Generic (PLEG): container finished" podID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerID="406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835" exitCode=0 Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.953948 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnvb2" event={"ID":"20cf93d8-a0c4-4855-9a18-b8e1ea19e417","Type":"ContainerDied","Data":"406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835"} Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.953984 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnvb2" event={"ID":"20cf93d8-a0c4-4855-9a18-b8e1ea19e417","Type":"ContainerDied","Data":"b91dfcc0a55e7f78bdfd59306beaed39399ca278506b90a4cd7e3ad2eb529cb3"} Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.953962 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.954035 4942 scope.go:117] "RemoveContainer" containerID="406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.978287 4942 scope.go:117] "RemoveContainer" containerID="32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0" Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.000632 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnvb2"] Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.012252 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnvb2"] Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.056728 4942 scope.go:117] "RemoveContainer" containerID="a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67" Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.081315 4942 scope.go:117] "RemoveContainer" containerID="406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835" Feb 18 19:43:38 crc kubenswrapper[4942]: E0218 19:43:38.081791 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835\": container with ID starting with 406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835 not found: ID does not exist" containerID="406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835" Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.081828 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835"} err="failed to get container status \"406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835\": rpc error: code = NotFound desc = could not find container 
\"406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835\": container with ID starting with 406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835 not found: ID does not exist" Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.081854 4942 scope.go:117] "RemoveContainer" containerID="32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0" Feb 18 19:43:38 crc kubenswrapper[4942]: E0218 19:43:38.082191 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0\": container with ID starting with 32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0 not found: ID does not exist" containerID="32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0" Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.082220 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0"} err="failed to get container status \"32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0\": rpc error: code = NotFound desc = could not find container \"32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0\": container with ID starting with 32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0 not found: ID does not exist" Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.082237 4942 scope.go:117] "RemoveContainer" containerID="a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67" Feb 18 19:43:38 crc kubenswrapper[4942]: E0218 19:43:38.082478 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67\": container with ID starting with a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67 not found: ID does not exist" 
containerID="a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67" Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.082501 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67"} err="failed to get container status \"a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67\": rpc error: code = NotFound desc = could not find container \"a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67\": container with ID starting with a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67 not found: ID does not exist" Feb 18 19:43:39 crc kubenswrapper[4942]: I0218 19:43:39.057542 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" path="/var/lib/kubelet/pods/20cf93d8-a0c4-4855-9a18-b8e1ea19e417/volumes" Feb 18 19:43:47 crc kubenswrapper[4942]: I0218 19:43:47.196180 4942 scope.go:117] "RemoveContainer" containerID="a988a34c898a05087381b3c398ec9025e84f7ccd37d7a000f5a4025b770b9c31" Feb 18 19:43:47 crc kubenswrapper[4942]: I0218 19:43:47.227736 4942 scope.go:117] "RemoveContainer" containerID="4d23d58052be19c944bbfb1bdcae23f79449638dec97cb1fe1f8ae8d61b02fff" Feb 18 19:43:47 crc kubenswrapper[4942]: I0218 19:43:47.256135 4942 scope.go:117] "RemoveContainer" containerID="5c56a687bcaef7e5e54c6de1b78374726c82904080884876b458c8525f4a0752" Feb 18 19:43:55 crc kubenswrapper[4942]: I0218 19:43:55.056429 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d9d4-account-create-update-7gsvf"] Feb 18 19:43:55 crc kubenswrapper[4942]: I0218 19:43:55.057067 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d9d4-account-create-update-7gsvf"] Feb 18 19:43:56 crc kubenswrapper[4942]: I0218 19:43:56.064714 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-9457-account-create-update-5hrw4"] Feb 18 
19:43:56 crc kubenswrapper[4942]: I0218 19:43:56.076431 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-9457-account-create-update-5hrw4"] Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.048083 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="646ba630-1210-431d-8902-b5c0968b35bb" path="/var/lib/kubelet/pods/646ba630-1210-431d-8902-b5c0968b35bb/volumes" Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.049542 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba056ec7-86a5-43b6-aebd-a22b21843cc3" path="/var/lib/kubelet/pods/ba056ec7-86a5-43b6-aebd-a22b21843cc3/volumes" Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.051662 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-h49cz"] Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.062680 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-9xsbj"] Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.071222 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-59tjm"] Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.079087 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ce28-account-create-update-h5jjz"] Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.090197 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9xsbj"] Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.099195 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ce28-account-create-update-h5jjz"] Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.107459 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-h49cz"] Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.115745 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-59tjm"] Feb 18 
19:43:59 crc kubenswrapper[4942]: I0218 19:43:59.049325 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f4f7b72-968a-4aed-b6e9-87f43677f342" path="/var/lib/kubelet/pods/2f4f7b72-968a-4aed-b6e9-87f43677f342/volumes" Feb 18 19:43:59 crc kubenswrapper[4942]: I0218 19:43:59.051387 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="371430b6-c9b6-48ba-a1a7-d1ce72a001ec" path="/var/lib/kubelet/pods/371430b6-c9b6-48ba-a1a7-d1ce72a001ec/volumes" Feb 18 19:43:59 crc kubenswrapper[4942]: I0218 19:43:59.052697 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6821c713-6163-44f5-a749-415f0c1d8337" path="/var/lib/kubelet/pods/6821c713-6163-44f5-a749-415f0c1d8337/volumes" Feb 18 19:43:59 crc kubenswrapper[4942]: I0218 19:43:59.053551 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3564c8a-5e18-4c53-b225-7e9baf41a371" path="/var/lib/kubelet/pods/a3564c8a-5e18-4c53-b225-7e9baf41a371/volumes" Feb 18 19:44:04 crc kubenswrapper[4942]: I0218 19:44:04.049957 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-tjf5x"] Feb 18 19:44:04 crc kubenswrapper[4942]: I0218 19:44:04.063510 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-tjf5x"] Feb 18 19:44:04 crc kubenswrapper[4942]: I0218 19:44:04.076821 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8ff9-account-create-update-k7n8f"] Feb 18 19:44:04 crc kubenswrapper[4942]: I0218 19:44:04.088057 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8ff9-account-create-update-k7n8f"] Feb 18 19:44:05 crc kubenswrapper[4942]: I0218 19:44:05.057080 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a1ca129-f896-4d68-b119-701a991fe0ba" path="/var/lib/kubelet/pods/6a1ca129-f896-4d68-b119-701a991fe0ba/volumes" Feb 18 19:44:05 crc kubenswrapper[4942]: I0218 19:44:05.058621 4942 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8611c14f-da0c-410e-9c3a-dc6cb5a698a7" path="/var/lib/kubelet/pods/8611c14f-da0c-410e-9c3a-dc6cb5a698a7/volumes" Feb 18 19:44:23 crc kubenswrapper[4942]: I0218 19:44:23.741470 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:44:23 crc kubenswrapper[4942]: I0218 19:44:23.742158 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.064290 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f862-account-create-update-29qlq"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.081509 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-njfd6"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.094696 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4zlhp"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.104108 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8f782"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.113523 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f862-account-create-update-29qlq"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.122670 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8f782"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.129989 
4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e916-account-create-update-lm2r5"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.137068 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4zlhp"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.145155 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-njfd6"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.152955 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e916-account-create-update-lm2r5"] Feb 18 19:44:29 crc kubenswrapper[4942]: I0218 19:44:29.057669 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35dbdf24-b5f9-4a19-96f9-1fe390df90e1" path="/var/lib/kubelet/pods/35dbdf24-b5f9-4a19-96f9-1fe390df90e1/volumes" Feb 18 19:44:29 crc kubenswrapper[4942]: I0218 19:44:29.059670 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4edc6296-1ba6-43f7-a076-93f94c77a2c9" path="/var/lib/kubelet/pods/4edc6296-1ba6-43f7-a076-93f94c77a2c9/volumes" Feb 18 19:44:29 crc kubenswrapper[4942]: I0218 19:44:29.062023 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a8e424f-44a5-4eaa-9f3f-882f070aa404" path="/var/lib/kubelet/pods/9a8e424f-44a5-4eaa-9f3f-882f070aa404/volumes" Feb 18 19:44:29 crc kubenswrapper[4942]: I0218 19:44:29.063400 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dddbc305-d881-4ef9-ada1-49e8f180162c" path="/var/lib/kubelet/pods/dddbc305-d881-4ef9-ada1-49e8f180162c/volumes" Feb 18 19:44:29 crc kubenswrapper[4942]: I0218 19:44:29.065932 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcea68e2-0d37-4812-a7ad-403e59b7b556" path="/var/lib/kubelet/pods/fcea68e2-0d37-4812-a7ad-403e59b7b556/volumes" Feb 18 19:44:31 crc kubenswrapper[4942]: I0218 19:44:31.062615 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-db-create-s54gq"] Feb 18 19:44:31 crc kubenswrapper[4942]: I0218 19:44:31.063884 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-fee6-account-create-update-jhlbn"] Feb 18 19:44:31 crc kubenswrapper[4942]: I0218 19:44:31.088321 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-s54gq"] Feb 18 19:44:31 crc kubenswrapper[4942]: I0218 19:44:31.100217 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-fee6-account-create-update-jhlbn"] Feb 18 19:44:33 crc kubenswrapper[4942]: I0218 19:44:33.048644 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c903d652-2880-43bd-9445-f1b03764f413" path="/var/lib/kubelet/pods/c903d652-2880-43bd-9445-f1b03764f413/volumes" Feb 18 19:44:33 crc kubenswrapper[4942]: I0218 19:44:33.050357 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd491cd9-f58f-4821-8004-a5a4762d6bdb" path="/var/lib/kubelet/pods/fd491cd9-f58f-4821-8004-a5a4762d6bdb/volumes" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.380410 4942 scope.go:117] "RemoveContainer" containerID="7973de763d55a77ffbc3e3d1001daee7ca68a526d4309188caa67a4ce4135e55" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.447366 4942 scope.go:117] "RemoveContainer" containerID="727dde1e275a9b0b467f516dab63cba62b27e6168562e7bbd076fe7b30b2869f" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.506946 4942 scope.go:117] "RemoveContainer" containerID="837718ff91cb054c2e7fe10e6239bf44f02d0dd7d7855db97e09e837f3dcef65" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.560820 4942 scope.go:117] "RemoveContainer" containerID="549770ba7dc9b2efdf1b7dbd1827ec366b9e1e693aeec0f1a695091cdbeda9bc" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.617825 4942 scope.go:117] "RemoveContainer" containerID="761092c069dfd66382418fe07bf3c15f0aee53ccbdf6b11196e33385aae3fc8b" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 
19:44:47.655528 4942 scope.go:117] "RemoveContainer" containerID="b1d49648de6b3a759e8404975f38b8d6b28e2ed6cf3c88b12649b6a3fed64a43" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.708142 4942 scope.go:117] "RemoveContainer" containerID="c942add3a433a64faf7638403a168e22e7b5e2f26ceaa17e1731c6044072942d" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.739699 4942 scope.go:117] "RemoveContainer" containerID="0e02d4fe73a4e293f62bf869926c2629a47060f29d5a8a14b093d650895a851c" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.770491 4942 scope.go:117] "RemoveContainer" containerID="4e49158c977b69109020d9375918418b28e7f6670849fc1495f27f4bb36f8420" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.793814 4942 scope.go:117] "RemoveContainer" containerID="376d0fc77c68f0c59dee539c15e1e9f915935989d2e259a07dc205d03784efe9" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.833657 4942 scope.go:117] "RemoveContainer" containerID="f3ac5111bbb6bd92f96a1d8bfbfe931ddce997416181ddc95500cf9c11a42867" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.857972 4942 scope.go:117] "RemoveContainer" containerID="811ec8cee78f943aac4bbfb29b95ea4e9d51e51453fc9da48c7eabb6372bfb2b" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.885286 4942 scope.go:117] "RemoveContainer" containerID="e0735df4037c9d26aa2f69d57c8e775cb7c18bc1fdb68127c0b914f822f83bec" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.914440 4942 scope.go:117] "RemoveContainer" containerID="a8c3861121c5594ca501846681ea609d414d4c26e10e1b891f8ff728174138b2" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.935039 4942 scope.go:117] "RemoveContainer" containerID="55245bf67e01b4a9996ff8822e688651e94d412e130a306f9914243a723acae1" Feb 18 19:44:49 crc kubenswrapper[4942]: I0218 19:44:49.057382 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-87p82"] Feb 18 19:44:49 crc kubenswrapper[4942]: I0218 19:44:49.062383 4942 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/keystone-db-sync-87p82"] Feb 18 19:44:49 crc kubenswrapper[4942]: I0218 19:44:49.770918 4942 generic.go:334] "Generic (PLEG): container finished" podID="b3125f54-a594-4c20-ab3f-298cd68f3709" containerID="ac85b185c9e8dc808f91626080d82ac680145476d477bab1b8677a51e222d00a" exitCode=0 Feb 18 19:44:49 crc kubenswrapper[4942]: I0218 19:44:49.770974 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" event={"ID":"b3125f54-a594-4c20-ab3f-298cd68f3709","Type":"ContainerDied","Data":"ac85b185c9e8dc808f91626080d82ac680145476d477bab1b8677a51e222d00a"} Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.058819 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ed4f34d-fe0d-402c-95d3-171e73eb5bd5" path="/var/lib/kubelet/pods/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5/volumes" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.246947 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.313234 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9rz6\" (UniqueName: \"kubernetes.io/projected/b3125f54-a594-4c20-ab3f-298cd68f3709-kube-api-access-v9rz6\") pod \"b3125f54-a594-4c20-ab3f-298cd68f3709\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.313328 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-inventory\") pod \"b3125f54-a594-4c20-ab3f-298cd68f3709\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.313425 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-ssh-key-openstack-edpm-ipam\") pod \"b3125f54-a594-4c20-ab3f-298cd68f3709\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.327962 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3125f54-a594-4c20-ab3f-298cd68f3709-kube-api-access-v9rz6" (OuterVolumeSpecName: "kube-api-access-v9rz6") pod "b3125f54-a594-4c20-ab3f-298cd68f3709" (UID: "b3125f54-a594-4c20-ab3f-298cd68f3709"). InnerVolumeSpecName "kube-api-access-v9rz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.351063 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-inventory" (OuterVolumeSpecName: "inventory") pod "b3125f54-a594-4c20-ab3f-298cd68f3709" (UID: "b3125f54-a594-4c20-ab3f-298cd68f3709"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.354547 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b3125f54-a594-4c20-ab3f-298cd68f3709" (UID: "b3125f54-a594-4c20-ab3f-298cd68f3709"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.415145 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9rz6\" (UniqueName: \"kubernetes.io/projected/b3125f54-a594-4c20-ab3f-298cd68f3709-kube-api-access-v9rz6\") on node \"crc\" DevicePath \"\"" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.415215 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.415235 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.790606 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" event={"ID":"b3125f54-a594-4c20-ab3f-298cd68f3709","Type":"ContainerDied","Data":"676f2e621a4acf72ea1e7f3770f3144f3d9cebbcc9f5130f6749209326139fa6"} Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.790647 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="676f2e621a4acf72ea1e7f3770f3144f3d9cebbcc9f5130f6749209326139fa6" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 
19:44:51.790658 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.887259 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9"] Feb 18 19:44:51 crc kubenswrapper[4942]: E0218 19:44:51.887701 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerName="extract-content" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.887719 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerName="extract-content" Feb 18 19:44:51 crc kubenswrapper[4942]: E0218 19:44:51.887752 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3125f54-a594-4c20-ab3f-298cd68f3709" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.887776 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3125f54-a594-4c20-ab3f-298cd68f3709" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 19:44:51 crc kubenswrapper[4942]: E0218 19:44:51.887787 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerName="registry-server" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.887793 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerName="registry-server" Feb 18 19:44:51 crc kubenswrapper[4942]: E0218 19:44:51.887809 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerName="extract-utilities" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.887816 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" 
containerName="extract-utilities" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.888044 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3125f54-a594-4c20-ab3f-298cd68f3709" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.888055 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerName="registry-server" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.890983 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.895511 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.897048 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.907066 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9"] Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.909572 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.909803 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.027968 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: 
\"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.028094 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx945\" (UniqueName: \"kubernetes.io/projected/abda062b-22ed-4d21-adbb-f2b906e36e02-kube-api-access-fx945\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.028255 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.130427 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.130640 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc 
kubenswrapper[4942]: I0218 19:44:52.130713 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx945\" (UniqueName: \"kubernetes.io/projected/abda062b-22ed-4d21-adbb-f2b906e36e02-kube-api-access-fx945\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.136440 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.137956 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.159627 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx945\" (UniqueName: \"kubernetes.io/projected/abda062b-22ed-4d21-adbb-f2b906e36e02-kube-api-access-fx945\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.210899 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.769259 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9"] Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.805751 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" event={"ID":"abda062b-22ed-4d21-adbb-f2b906e36e02","Type":"ContainerStarted","Data":"63f65f48ee5ab0059c48c01f1282e83a7fc007edbf7a2254b1c98d6e5aa16551"} Feb 18 19:44:53 crc kubenswrapper[4942]: I0218 19:44:53.741582 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:44:53 crc kubenswrapper[4942]: I0218 19:44:53.741665 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:44:53 crc kubenswrapper[4942]: I0218 19:44:53.817221 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" event={"ID":"abda062b-22ed-4d21-adbb-f2b906e36e02","Type":"ContainerStarted","Data":"56db9471b0cd01eb8b5e1e757306ee35e2890630e80601f66b36dbc89054a34f"} Feb 18 19:44:53 crc kubenswrapper[4942]: I0218 19:44:53.836875 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" podStartSLOduration=2.40061548 
podStartE2EDuration="2.836845419s" podCreationTimestamp="2026-02-18 19:44:51 +0000 UTC" firstStartedPulling="2026-02-18 19:44:52.774580705 +0000 UTC m=+1652.479513400" lastFinishedPulling="2026-02-18 19:44:53.210810664 +0000 UTC m=+1652.915743339" observedRunningTime="2026-02-18 19:44:53.832191376 +0000 UTC m=+1653.537124081" watchObservedRunningTime="2026-02-18 19:44:53.836845419 +0000 UTC m=+1653.541778094" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.146218 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9"] Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.147834 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.151313 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.152121 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.170158 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9"] Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.311363 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f02d65f2-f70f-4982-a9d5-fc9d75091181-config-volume\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.311698 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f02d65f2-f70f-4982-a9d5-fc9d75091181-secret-volume\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.311857 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnzqz\" (UniqueName: \"kubernetes.io/projected/f02d65f2-f70f-4982-a9d5-fc9d75091181-kube-api-access-hnzqz\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.413699 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnzqz\" (UniqueName: \"kubernetes.io/projected/f02d65f2-f70f-4982-a9d5-fc9d75091181-kube-api-access-hnzqz\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.413852 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f02d65f2-f70f-4982-a9d5-fc9d75091181-config-volume\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.413880 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f02d65f2-f70f-4982-a9d5-fc9d75091181-secret-volume\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.414601 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f02d65f2-f70f-4982-a9d5-fc9d75091181-config-volume\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.423466 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f02d65f2-f70f-4982-a9d5-fc9d75091181-secret-volume\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.438855 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnzqz\" (UniqueName: \"kubernetes.io/projected/f02d65f2-f70f-4982-a9d5-fc9d75091181-kube-api-access-hnzqz\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.479668 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.956737 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9"] Feb 18 19:45:00 crc kubenswrapper[4942]: W0218 19:45:00.970873 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf02d65f2_f70f_4982_a9d5_fc9d75091181.slice/crio-f5e0a0f6fac191b773c4a7caace007f6303aacd1e6bbd0284ba6f31790062301 WatchSource:0}: Error finding container f5e0a0f6fac191b773c4a7caace007f6303aacd1e6bbd0284ba6f31790062301: Status 404 returned error can't find the container with id f5e0a0f6fac191b773c4a7caace007f6303aacd1e6bbd0284ba6f31790062301 Feb 18 19:45:01 crc kubenswrapper[4942]: I0218 19:45:01.916048 4942 generic.go:334] "Generic (PLEG): container finished" podID="f02d65f2-f70f-4982-a9d5-fc9d75091181" containerID="bdd33fc87e63584fee347049c15193b1ff470c22181f3d250c7e0de28ba81fd9" exitCode=0 Feb 18 19:45:01 crc kubenswrapper[4942]: I0218 19:45:01.916168 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" event={"ID":"f02d65f2-f70f-4982-a9d5-fc9d75091181","Type":"ContainerDied","Data":"bdd33fc87e63584fee347049c15193b1ff470c22181f3d250c7e0de28ba81fd9"} Feb 18 19:45:01 crc kubenswrapper[4942]: I0218 19:45:01.917600 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" event={"ID":"f02d65f2-f70f-4982-a9d5-fc9d75091181","Type":"ContainerStarted","Data":"f5e0a0f6fac191b773c4a7caace007f6303aacd1e6bbd0284ba6f31790062301"} Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.096366 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-zw8ls"] Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.106267 4942 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-zw8ls"] Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.313222 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.475881 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f02d65f2-f70f-4982-a9d5-fc9d75091181-config-volume\") pod \"f02d65f2-f70f-4982-a9d5-fc9d75091181\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.476285 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnzqz\" (UniqueName: \"kubernetes.io/projected/f02d65f2-f70f-4982-a9d5-fc9d75091181-kube-api-access-hnzqz\") pod \"f02d65f2-f70f-4982-a9d5-fc9d75091181\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.476315 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f02d65f2-f70f-4982-a9d5-fc9d75091181-secret-volume\") pod \"f02d65f2-f70f-4982-a9d5-fc9d75091181\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.476510 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f02d65f2-f70f-4982-a9d5-fc9d75091181-config-volume" (OuterVolumeSpecName: "config-volume") pod "f02d65f2-f70f-4982-a9d5-fc9d75091181" (UID: "f02d65f2-f70f-4982-a9d5-fc9d75091181"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.476878 4942 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f02d65f2-f70f-4982-a9d5-fc9d75091181-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.482526 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f02d65f2-f70f-4982-a9d5-fc9d75091181-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f02d65f2-f70f-4982-a9d5-fc9d75091181" (UID: "f02d65f2-f70f-4982-a9d5-fc9d75091181"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.483951 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f02d65f2-f70f-4982-a9d5-fc9d75091181-kube-api-access-hnzqz" (OuterVolumeSpecName: "kube-api-access-hnzqz") pod "f02d65f2-f70f-4982-a9d5-fc9d75091181" (UID: "f02d65f2-f70f-4982-a9d5-fc9d75091181"). InnerVolumeSpecName "kube-api-access-hnzqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.578689 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnzqz\" (UniqueName: \"kubernetes.io/projected/f02d65f2-f70f-4982-a9d5-fc9d75091181-kube-api-access-hnzqz\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.578734 4942 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f02d65f2-f70f-4982-a9d5-fc9d75091181-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.937170 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" event={"ID":"f02d65f2-f70f-4982-a9d5-fc9d75091181","Type":"ContainerDied","Data":"f5e0a0f6fac191b773c4a7caace007f6303aacd1e6bbd0284ba6f31790062301"} Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.937495 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5e0a0f6fac191b773c4a7caace007f6303aacd1e6bbd0284ba6f31790062301" Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.937462 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:05 crc kubenswrapper[4942]: I0218 19:45:05.047858 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" path="/var/lib/kubelet/pods/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3/volumes" Feb 18 19:45:20 crc kubenswrapper[4942]: I0218 19:45:20.036200 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-p9l27"] Feb 18 19:45:20 crc kubenswrapper[4942]: I0218 19:45:20.046658 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-p9l27"] Feb 18 19:45:21 crc kubenswrapper[4942]: I0218 19:45:21.071549 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6c912f7-7ee8-4f53-a358-a6a6a5088be5" path="/var/lib/kubelet/pods/a6c912f7-7ee8-4f53-a358-a6a6a5088be5/volumes" Feb 18 19:45:23 crc kubenswrapper[4942]: I0218 19:45:23.741481 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:45:23 crc kubenswrapper[4942]: I0218 19:45:23.742032 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:45:23 crc kubenswrapper[4942]: I0218 19:45:23.742124 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:45:23 crc kubenswrapper[4942]: I0218 19:45:23.743395 4942 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:45:23 crc kubenswrapper[4942]: I0218 19:45:23.743532 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" gracePeriod=600 Feb 18 19:45:23 crc kubenswrapper[4942]: E0218 19:45:23.880475 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:45:24 crc kubenswrapper[4942]: I0218 19:45:24.181694 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" exitCode=0 Feb 18 19:45:24 crc kubenswrapper[4942]: I0218 19:45:24.181797 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"} Feb 18 19:45:24 crc kubenswrapper[4942]: I0218 19:45:24.181903 4942 scope.go:117] "RemoveContainer" containerID="0f7c7ce7194dc50e8e7ff903a9631c5d1d6654771462dbd4df2dfa299f3641bf" Feb 18 19:45:24 crc 
kubenswrapper[4942]: I0218 19:45:24.183087 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:45:24 crc kubenswrapper[4942]: E0218 19:45:24.184089 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:45:26 crc kubenswrapper[4942]: I0218 19:45:26.032789 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-9ntpw"] Feb 18 19:45:26 crc kubenswrapper[4942]: I0218 19:45:26.040529 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-9ntpw"] Feb 18 19:45:27 crc kubenswrapper[4942]: I0218 19:45:27.045200 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af8e769c-00c3-41a1-97c4-d91902767dfe" path="/var/lib/kubelet/pods/af8e769c-00c3-41a1-97c4-d91902767dfe/volumes" Feb 18 19:45:28 crc kubenswrapper[4942]: I0218 19:45:28.040832 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-4h9n5"] Feb 18 19:45:28 crc kubenswrapper[4942]: I0218 19:45:28.056104 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-4h9n5"] Feb 18 19:45:29 crc kubenswrapper[4942]: I0218 19:45:29.048281 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="983d5293-8413-4a29-88b2-ba775b3b4a8b" path="/var/lib/kubelet/pods/983d5293-8413-4a29-88b2-ba775b3b4a8b/volumes" Feb 18 19:45:30 crc kubenswrapper[4942]: I0218 19:45:30.029448 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tnqg7"] Feb 18 19:45:30 crc kubenswrapper[4942]: I0218 
19:45:30.039285 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tnqg7"] Feb 18 19:45:31 crc kubenswrapper[4942]: I0218 19:45:31.085248 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f29ae8a1-b3cc-452c-ac99-b450ef3125d8" path="/var/lib/kubelet/pods/f29ae8a1-b3cc-452c-ac99-b450ef3125d8/volumes" Feb 18 19:45:37 crc kubenswrapper[4942]: I0218 19:45:37.037748 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:45:37 crc kubenswrapper[4942]: E0218 19:45:37.038718 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:45:48 crc kubenswrapper[4942]: I0218 19:45:48.036652 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:45:48 crc kubenswrapper[4942]: E0218 19:45:48.037388 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:45:48 crc kubenswrapper[4942]: I0218 19:45:48.271057 4942 scope.go:117] "RemoveContainer" containerID="a5a266a5f35f400b4926f114a4e397e8de76de3f56a176f14c64d1b553d123f4" Feb 18 19:45:48 crc kubenswrapper[4942]: I0218 19:45:48.306338 4942 scope.go:117] "RemoveContainer" 
containerID="1f69a1fd29ab925cd8cf8e9aff116531b62f274c86f6998747eb096250393ed9" Feb 18 19:45:48 crc kubenswrapper[4942]: I0218 19:45:48.366493 4942 scope.go:117] "RemoveContainer" containerID="8c6545f8eaa3b666b06d888c16ee9caa900adcec0bcd683e72e4f96180bd297d" Feb 18 19:45:48 crc kubenswrapper[4942]: I0218 19:45:48.422025 4942 scope.go:117] "RemoveContainer" containerID="96103ab065d78416959c1d84cf5d96a95a67496c5bf29a0bff2dd2c96318a211" Feb 18 19:45:48 crc kubenswrapper[4942]: I0218 19:45:48.483205 4942 scope.go:117] "RemoveContainer" containerID="16fd17087ed9bd06ba590a2897d1853b93c4e9cb882e3c311955fd4cf453c84b" Feb 18 19:45:48 crc kubenswrapper[4942]: I0218 19:45:48.537884 4942 scope.go:117] "RemoveContainer" containerID="373bd2d7e6e62cf5defbed6522169de2de3264581e7024f113223b1465d241c5" Feb 18 19:45:50 crc kubenswrapper[4942]: I0218 19:45:50.032510 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qvzh5"] Feb 18 19:45:50 crc kubenswrapper[4942]: I0218 19:45:50.040178 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qvzh5"] Feb 18 19:45:50 crc kubenswrapper[4942]: I0218 19:45:50.046808 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-h2kjs"] Feb 18 19:45:50 crc kubenswrapper[4942]: I0218 19:45:50.053436 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-h2kjs"] Feb 18 19:45:51 crc kubenswrapper[4942]: I0218 19:45:51.053378 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aeac097-ba93-4859-a14f-839ae1421e28" path="/var/lib/kubelet/pods/8aeac097-ba93-4859-a14f-839ae1421e28/volumes" Feb 18 19:45:51 crc kubenswrapper[4942]: I0218 19:45:51.054296 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8db7f68b-a733-44fc-90b9-a1dd489fb42d" path="/var/lib/kubelet/pods/8db7f68b-a733-44fc-90b9-a1dd489fb42d/volumes" Feb 18 19:46:02 crc kubenswrapper[4942]: I0218 19:46:02.036190 4942 
scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:46:02 crc kubenswrapper[4942]: E0218 19:46:02.038661 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:46:03 crc kubenswrapper[4942]: I0218 19:46:03.635365 4942 generic.go:334] "Generic (PLEG): container finished" podID="abda062b-22ed-4d21-adbb-f2b906e36e02" containerID="56db9471b0cd01eb8b5e1e757306ee35e2890630e80601f66b36dbc89054a34f" exitCode=0 Feb 18 19:46:03 crc kubenswrapper[4942]: I0218 19:46:03.635501 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" event={"ID":"abda062b-22ed-4d21-adbb-f2b906e36e02","Type":"ContainerDied","Data":"56db9471b0cd01eb8b5e1e757306ee35e2890630e80601f66b36dbc89054a34f"} Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.151550 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.293811 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-inventory\") pod \"abda062b-22ed-4d21-adbb-f2b906e36e02\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.294068 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx945\" (UniqueName: \"kubernetes.io/projected/abda062b-22ed-4d21-adbb-f2b906e36e02-kube-api-access-fx945\") pod \"abda062b-22ed-4d21-adbb-f2b906e36e02\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.294213 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-ssh-key-openstack-edpm-ipam\") pod \"abda062b-22ed-4d21-adbb-f2b906e36e02\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.304121 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abda062b-22ed-4d21-adbb-f2b906e36e02-kube-api-access-fx945" (OuterVolumeSpecName: "kube-api-access-fx945") pod "abda062b-22ed-4d21-adbb-f2b906e36e02" (UID: "abda062b-22ed-4d21-adbb-f2b906e36e02"). InnerVolumeSpecName "kube-api-access-fx945". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.396583 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx945\" (UniqueName: \"kubernetes.io/projected/abda062b-22ed-4d21-adbb-f2b906e36e02-kube-api-access-fx945\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.794780 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl"] Feb 18 19:46:05 crc kubenswrapper[4942]: E0218 19:46:05.795213 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f02d65f2-f70f-4982-a9d5-fc9d75091181" containerName="collect-profiles" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.795247 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f02d65f2-f70f-4982-a9d5-fc9d75091181" containerName="collect-profiles" Feb 18 19:46:05 crc kubenswrapper[4942]: E0218 19:46:05.795296 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abda062b-22ed-4d21-adbb-f2b906e36e02" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.795306 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="abda062b-22ed-4d21-adbb-f2b906e36e02" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.795497 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="abda062b-22ed-4d21-adbb-f2b906e36e02" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.795526 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f02d65f2-f70f-4982-a9d5-fc9d75091181" containerName="collect-profiles" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.796272 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.814079 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl"] Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.906314 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.906658 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8gq8\" (UniqueName: \"kubernetes.io/projected/57203330-4497-4588-ac58-2cff41481077-kube-api-access-j8gq8\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.906780 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.009358 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.009623 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8gq8\" (UniqueName: \"kubernetes.io/projected/57203330-4497-4588-ac58-2cff41481077-kube-api-access-j8gq8\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.009700 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.017354 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.023023 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.060904 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8gq8\" (UniqueName: \"kubernetes.io/projected/57203330-4497-4588-ac58-2cff41481077-kube-api-access-j8gq8\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.119841 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.705474 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-inventory" (OuterVolumeSpecName: "inventory") pod "abda062b-22ed-4d21-adbb-f2b906e36e02" (UID: "abda062b-22ed-4d21-adbb-f2b906e36e02"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.715441 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "abda062b-22ed-4d21-adbb-f2b906e36e02" (UID: "abda062b-22ed-4d21-adbb-f2b906e36e02"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.728508 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.728593 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.805177 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" event={"ID":"abda062b-22ed-4d21-adbb-f2b906e36e02","Type":"ContainerDied","Data":"63f65f48ee5ab0059c48c01f1282e83a7fc007edbf7a2254b1c98d6e5aa16551"} Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.805214 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63f65f48ee5ab0059c48c01f1282e83a7fc007edbf7a2254b1c98d6e5aa16551" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.805271 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:46:07 crc kubenswrapper[4942]: I0218 19:46:07.544254 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl"] Feb 18 19:46:07 crc kubenswrapper[4942]: I0218 19:46:07.813283 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" event={"ID":"57203330-4497-4588-ac58-2cff41481077","Type":"ContainerStarted","Data":"007cb05fac2a4c3ca4ff7d6370c98c9994e395dd0aafe8650d080837b9475653"} Feb 18 19:46:08 crc kubenswrapper[4942]: I0218 19:46:08.830475 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" event={"ID":"57203330-4497-4588-ac58-2cff41481077","Type":"ContainerStarted","Data":"2639b93e0de7223af939065d46c5fbb2b93ee2789e4195514a63bc70033c0a67"} Feb 18 19:46:08 crc kubenswrapper[4942]: I0218 19:46:08.855110 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" podStartSLOduration=3.144278467 podStartE2EDuration="3.855094849s" podCreationTimestamp="2026-02-18 19:46:05 +0000 UTC" firstStartedPulling="2026-02-18 19:46:07.556069052 +0000 UTC m=+1727.261001717" lastFinishedPulling="2026-02-18 19:46:08.266885424 +0000 UTC m=+1727.971818099" observedRunningTime="2026-02-18 19:46:08.848603588 +0000 UTC m=+1728.553536283" watchObservedRunningTime="2026-02-18 19:46:08.855094849 +0000 UTC m=+1728.560027514" Feb 18 19:46:13 crc kubenswrapper[4942]: I0218 19:46:13.897701 4942 generic.go:334] "Generic (PLEG): container finished" podID="57203330-4497-4588-ac58-2cff41481077" containerID="2639b93e0de7223af939065d46c5fbb2b93ee2789e4195514a63bc70033c0a67" exitCode=0 Feb 18 19:46:13 crc kubenswrapper[4942]: I0218 19:46:13.897825 4942 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" event={"ID":"57203330-4497-4588-ac58-2cff41481077","Type":"ContainerDied","Data":"2639b93e0de7223af939065d46c5fbb2b93ee2789e4195514a63bc70033c0a67"} Feb 18 19:46:14 crc kubenswrapper[4942]: I0218 19:46:14.035646 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:46:14 crc kubenswrapper[4942]: E0218 19:46:14.035913 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.396726 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.482516 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-inventory\") pod \"57203330-4497-4588-ac58-2cff41481077\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.482623 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8gq8\" (UniqueName: \"kubernetes.io/projected/57203330-4497-4588-ac58-2cff41481077-kube-api-access-j8gq8\") pod \"57203330-4497-4588-ac58-2cff41481077\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.482662 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-ssh-key-openstack-edpm-ipam\") pod \"57203330-4497-4588-ac58-2cff41481077\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.497931 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57203330-4497-4588-ac58-2cff41481077-kube-api-access-j8gq8" (OuterVolumeSpecName: "kube-api-access-j8gq8") pod "57203330-4497-4588-ac58-2cff41481077" (UID: "57203330-4497-4588-ac58-2cff41481077"). InnerVolumeSpecName "kube-api-access-j8gq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.518030 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-inventory" (OuterVolumeSpecName: "inventory") pod "57203330-4497-4588-ac58-2cff41481077" (UID: "57203330-4497-4588-ac58-2cff41481077"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.522055 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "57203330-4497-4588-ac58-2cff41481077" (UID: "57203330-4497-4588-ac58-2cff41481077"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.585809 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.585849 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8gq8\" (UniqueName: \"kubernetes.io/projected/57203330-4497-4588-ac58-2cff41481077-kube-api-access-j8gq8\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.585864 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.922587 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" event={"ID":"57203330-4497-4588-ac58-2cff41481077","Type":"ContainerDied","Data":"007cb05fac2a4c3ca4ff7d6370c98c9994e395dd0aafe8650d080837b9475653"} Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.923067 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="007cb05fac2a4c3ca4ff7d6370c98c9994e395dd0aafe8650d080837b9475653" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 
19:46:15.922691 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.019147 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6"] Feb 18 19:46:16 crc kubenswrapper[4942]: E0218 19:46:16.019993 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57203330-4497-4588-ac58-2cff41481077" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.020110 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="57203330-4497-4588-ac58-2cff41481077" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.020435 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="57203330-4497-4588-ac58-2cff41481077" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.021452 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.023915 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.024187 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.028273 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.043008 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.046718 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6"] Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.197983 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fmj7\" (UniqueName: \"kubernetes.io/projected/59349fa4-b215-47f3-93a7-7e9aca054947-kube-api-access-8fmj7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.198312 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 
19:46:16.198501 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.300242 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fmj7\" (UniqueName: \"kubernetes.io/projected/59349fa4-b215-47f3-93a7-7e9aca054947-kube-api-access-8fmj7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.300316 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.300431 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.305699 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.306318 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.323503 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fmj7\" (UniqueName: \"kubernetes.io/projected/59349fa4-b215-47f3-93a7-7e9aca054947-kube-api-access-8fmj7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.351210 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.970123 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6"] Feb 18 19:46:17 crc kubenswrapper[4942]: I0218 19:46:17.947869 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" event={"ID":"59349fa4-b215-47f3-93a7-7e9aca054947","Type":"ContainerStarted","Data":"865f01125c6391deacd831979ae4a148f4a3a2136ebe5b39793d52d94a72dbb3"} Feb 18 19:46:17 crc kubenswrapper[4942]: I0218 19:46:17.948613 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" event={"ID":"59349fa4-b215-47f3-93a7-7e9aca054947","Type":"ContainerStarted","Data":"49e8b3eead57c998bc9134855853dc809162614b2ed5c9cea7ed27fa70db4276"} Feb 18 19:46:17 crc kubenswrapper[4942]: I0218 19:46:17.974309 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" podStartSLOduration=2.572395549 podStartE2EDuration="2.974272563s" podCreationTimestamp="2026-02-18 19:46:15 +0000 UTC" firstStartedPulling="2026-02-18 19:46:16.976391948 +0000 UTC m=+1736.681324613" lastFinishedPulling="2026-02-18 19:46:17.378268922 +0000 UTC m=+1737.083201627" observedRunningTime="2026-02-18 19:46:17.963920719 +0000 UTC m=+1737.668853464" watchObservedRunningTime="2026-02-18 19:46:17.974272563 +0000 UTC m=+1737.679205268" Feb 18 19:46:26 crc kubenswrapper[4942]: I0218 19:46:26.037581 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:46:26 crc kubenswrapper[4942]: E0218 19:46:26.039107 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:46:27 crc kubenswrapper[4942]: I0218 19:46:27.051035 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hxdjn"] Feb 18 19:46:27 crc kubenswrapper[4942]: I0218 19:46:27.058284 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hxdjn"] Feb 18 19:46:29 crc kubenswrapper[4942]: I0218 19:46:29.078512 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54e11ed4-f85e-4125-acc8-b0b86cef91fb" path="/var/lib/kubelet/pods/54e11ed4-f85e-4125-acc8-b0b86cef91fb/volumes" Feb 18 19:46:29 crc kubenswrapper[4942]: I0218 19:46:29.080189 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f195-account-create-update-jjctk"] Feb 18 19:46:29 crc kubenswrapper[4942]: I0218 19:46:29.080224 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-f9r9j"] Feb 18 19:46:29 crc kubenswrapper[4942]: I0218 19:46:29.081481 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-d7fm8"] Feb 18 19:46:29 crc kubenswrapper[4942]: I0218 19:46:29.099387 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-d7fm8"] Feb 18 19:46:29 crc kubenswrapper[4942]: I0218 19:46:29.110793 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f195-account-create-update-jjctk"] Feb 18 19:46:29 crc kubenswrapper[4942]: I0218 19:46:29.118491 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-f9r9j"] Feb 18 19:46:30 crc kubenswrapper[4942]: I0218 19:46:30.036709 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-1b0e-account-create-update-p6b7z"] Feb 18 19:46:30 crc kubenswrapper[4942]: I0218 19:46:30.047164 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-a3b1-account-create-update-sdgp2"] Feb 18 19:46:30 crc kubenswrapper[4942]: I0218 19:46:30.058088 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-a3b1-account-create-update-sdgp2"] Feb 18 19:46:30 crc kubenswrapper[4942]: I0218 19:46:30.065704 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1b0e-account-create-update-p6b7z"] Feb 18 19:46:31 crc kubenswrapper[4942]: I0218 19:46:31.054049 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3319773b-d924-402a-adbd-f421ee14c994" path="/var/lib/kubelet/pods/3319773b-d924-402a-adbd-f421ee14c994/volumes" Feb 18 19:46:31 crc kubenswrapper[4942]: I0218 19:46:31.055415 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="908017b2-bbca-42f2-b6a0-af358a18d1b7" path="/var/lib/kubelet/pods/908017b2-bbca-42f2-b6a0-af358a18d1b7/volumes" Feb 18 19:46:31 crc kubenswrapper[4942]: I0218 19:46:31.056336 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27" path="/var/lib/kubelet/pods/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27/volumes" Feb 18 19:46:31 crc kubenswrapper[4942]: I0218 19:46:31.057784 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de103e96-857c-4fa9-b78b-51c8f4734643" path="/var/lib/kubelet/pods/de103e96-857c-4fa9-b78b-51c8f4734643/volumes" Feb 18 19:46:31 crc kubenswrapper[4942]: I0218 19:46:31.059469 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef4ca914-d763-484f-aa35-39dbd725d14c" path="/var/lib/kubelet/pods/ef4ca914-d763-484f-aa35-39dbd725d14c/volumes" Feb 18 19:46:38 crc kubenswrapper[4942]: I0218 19:46:38.036907 4942 scope.go:117] "RemoveContainer" 
containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:46:38 crc kubenswrapper[4942]: E0218 19:46:38.037706 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:46:48 crc kubenswrapper[4942]: I0218 19:46:48.667008 4942 scope.go:117] "RemoveContainer" containerID="c2c74965083b09d2fda5c205fdee24ab8d991088f20cd6c4fd29973dbc9a7c39" Feb 18 19:46:48 crc kubenswrapper[4942]: I0218 19:46:48.698223 4942 scope.go:117] "RemoveContainer" containerID="12651ed44c362c43a5a615685457fd590c1593f4afa3ac50fda9dea54a2e1f71" Feb 18 19:46:48 crc kubenswrapper[4942]: I0218 19:46:48.776182 4942 scope.go:117] "RemoveContainer" containerID="866788c6c2a051f7476fcb5d58fd9c13e62810bec69e94d021b4616590e98f0b" Feb 18 19:46:48 crc kubenswrapper[4942]: I0218 19:46:48.821140 4942 scope.go:117] "RemoveContainer" containerID="a09c56da144b09bdcb7865a7cc27a2ff95e7937bd4f16a766144008dd1c49144" Feb 18 19:46:48 crc kubenswrapper[4942]: I0218 19:46:48.884061 4942 scope.go:117] "RemoveContainer" containerID="4ee086e7e747f10b7d38270d86480864775d35a33a827da89168941ff41e3484" Feb 18 19:46:48 crc kubenswrapper[4942]: I0218 19:46:48.922376 4942 scope.go:117] "RemoveContainer" containerID="9f2c359e5e4f7ba110dc92287a82c170423f21670d64c2a6b420aa0beff96ce3" Feb 18 19:46:48 crc kubenswrapper[4942]: I0218 19:46:48.957364 4942 scope.go:117] "RemoveContainer" containerID="4d566d8d0c1f2395dae51975108188a50f273b881992f487f3b84531a9f2e9f1" Feb 18 19:46:48 crc kubenswrapper[4942]: I0218 19:46:48.983509 4942 scope.go:117] "RemoveContainer" 
containerID="e0015f6cb0ed0e4e677017a14f5fcb4378f27372b8c41b1fdca89664675f56a0" Feb 18 19:46:53 crc kubenswrapper[4942]: I0218 19:46:53.037207 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:46:53 crc kubenswrapper[4942]: E0218 19:46:53.038667 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:46:56 crc kubenswrapper[4942]: I0218 19:46:56.373657 4942 generic.go:334] "Generic (PLEG): container finished" podID="59349fa4-b215-47f3-93a7-7e9aca054947" containerID="865f01125c6391deacd831979ae4a148f4a3a2136ebe5b39793d52d94a72dbb3" exitCode=0 Feb 18 19:46:56 crc kubenswrapper[4942]: I0218 19:46:56.373721 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" event={"ID":"59349fa4-b215-47f3-93a7-7e9aca054947","Type":"ContainerDied","Data":"865f01125c6391deacd831979ae4a148f4a3a2136ebe5b39793d52d94a72dbb3"} Feb 18 19:46:57 crc kubenswrapper[4942]: I0218 19:46:57.805038 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" Feb 18 19:46:57 crc kubenswrapper[4942]: I0218 19:46:57.992612 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-inventory\") pod \"59349fa4-b215-47f3-93a7-7e9aca054947\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " Feb 18 19:46:57 crc kubenswrapper[4942]: I0218 19:46:57.992922 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fmj7\" (UniqueName: \"kubernetes.io/projected/59349fa4-b215-47f3-93a7-7e9aca054947-kube-api-access-8fmj7\") pod \"59349fa4-b215-47f3-93a7-7e9aca054947\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " Feb 18 19:46:57 crc kubenswrapper[4942]: I0218 19:46:57.993163 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-ssh-key-openstack-edpm-ipam\") pod \"59349fa4-b215-47f3-93a7-7e9aca054947\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.000426 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59349fa4-b215-47f3-93a7-7e9aca054947-kube-api-access-8fmj7" (OuterVolumeSpecName: "kube-api-access-8fmj7") pod "59349fa4-b215-47f3-93a7-7e9aca054947" (UID: "59349fa4-b215-47f3-93a7-7e9aca054947"). InnerVolumeSpecName "kube-api-access-8fmj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.051225 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-inventory" (OuterVolumeSpecName: "inventory") pod "59349fa4-b215-47f3-93a7-7e9aca054947" (UID: "59349fa4-b215-47f3-93a7-7e9aca054947"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.060416 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "59349fa4-b215-47f3-93a7-7e9aca054947" (UID: "59349fa4-b215-47f3-93a7-7e9aca054947"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.096124 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.096411 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fmj7\" (UniqueName: \"kubernetes.io/projected/59349fa4-b215-47f3-93a7-7e9aca054947-kube-api-access-8fmj7\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.096501 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.401524 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" event={"ID":"59349fa4-b215-47f3-93a7-7e9aca054947","Type":"ContainerDied","Data":"49e8b3eead57c998bc9134855853dc809162614b2ed5c9cea7ed27fa70db4276"} Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.401608 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49e8b3eead57c998bc9134855853dc809162614b2ed5c9cea7ed27fa70db4276" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 
19:46:58.402248 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.509796 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"] Feb 18 19:46:58 crc kubenswrapper[4942]: E0218 19:46:58.510311 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59349fa4-b215-47f3-93a7-7e9aca054947" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.510413 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="59349fa4-b215-47f3-93a7-7e9aca054947" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.510675 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="59349fa4-b215-47f3-93a7-7e9aca054947" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.512750 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.518927 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.519099 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.519254 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.519408 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.525802 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"] Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.606276 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb9js\" (UniqueName: \"kubernetes.io/projected/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-kube-api-access-zb9js\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.606313 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" Feb 18 19:46:58 crc 
kubenswrapper[4942]: I0218 19:46:58.607728 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.709292 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb9js\" (UniqueName: \"kubernetes.io/projected/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-kube-api-access-zb9js\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.709547 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.709589 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.716081 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.721506 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.726783 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb9js\" (UniqueName: \"kubernetes.io/projected/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-kube-api-access-zb9js\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.883863 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" Feb 18 19:46:59 crc kubenswrapper[4942]: W0218 19:46:59.492054 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3aa31c62_3ec4_4764_b3c9_915f2ed0d979.slice/crio-40b6d04180d9292202facb30248186130f9617cf1e6b90655f07b0a1a6ace2e6 WatchSource:0}: Error finding container 40b6d04180d9292202facb30248186130f9617cf1e6b90655f07b0a1a6ace2e6: Status 404 returned error can't find the container with id 40b6d04180d9292202facb30248186130f9617cf1e6b90655f07b0a1a6ace2e6 Feb 18 19:46:59 crc kubenswrapper[4942]: I0218 19:46:59.498723 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"] Feb 18 19:47:00 crc kubenswrapper[4942]: I0218 19:47:00.431879 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" event={"ID":"3aa31c62-3ec4-4764-b3c9-915f2ed0d979","Type":"ContainerStarted","Data":"b8781f6763e3c28d5b28c6dbe67fce458666fad6da6df65d68c5f9c6691cc2d4"} Feb 18 19:47:00 crc kubenswrapper[4942]: I0218 19:47:00.432152 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" event={"ID":"3aa31c62-3ec4-4764-b3c9-915f2ed0d979","Type":"ContainerStarted","Data":"40b6d04180d9292202facb30248186130f9617cf1e6b90655f07b0a1a6ace2e6"} Feb 18 19:47:00 crc kubenswrapper[4942]: I0218 19:47:00.472671 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" podStartSLOduration=2.00583914 podStartE2EDuration="2.472651659s" podCreationTimestamp="2026-02-18 19:46:58 +0000 UTC" firstStartedPulling="2026-02-18 19:46:59.494746982 +0000 UTC m=+1779.199679667" lastFinishedPulling="2026-02-18 19:46:59.961559481 +0000 UTC m=+1779.666492186" 
observedRunningTime="2026-02-18 19:47:00.458270679 +0000 UTC m=+1780.163203364" watchObservedRunningTime="2026-02-18 19:47:00.472651659 +0000 UTC m=+1780.177584334" Feb 18 19:47:01 crc kubenswrapper[4942]: I0218 19:47:01.074814 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bbrrn"] Feb 18 19:47:01 crc kubenswrapper[4942]: I0218 19:47:01.091264 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bbrrn"] Feb 18 19:47:03 crc kubenswrapper[4942]: I0218 19:47:03.055794 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e14c764c-c1b5-4196-a48b-2aff4c38782b" path="/var/lib/kubelet/pods/e14c764c-c1b5-4196-a48b-2aff4c38782b/volumes" Feb 18 19:47:04 crc kubenswrapper[4942]: I0218 19:47:04.036331 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:47:04 crc kubenswrapper[4942]: E0218 19:47:04.036911 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:47:19 crc kubenswrapper[4942]: I0218 19:47:19.037074 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:47:19 crc kubenswrapper[4942]: E0218 19:47:19.038144 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:47:23 crc kubenswrapper[4942]: I0218 19:47:23.068831 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-6wkkj"] Feb 18 19:47:23 crc kubenswrapper[4942]: I0218 19:47:23.069147 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bqfl9"] Feb 18 19:47:23 crc kubenswrapper[4942]: I0218 19:47:23.076256 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-6wkkj"] Feb 18 19:47:23 crc kubenswrapper[4942]: I0218 19:47:23.083639 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bqfl9"] Feb 18 19:47:25 crc kubenswrapper[4942]: I0218 19:47:25.056748 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" path="/var/lib/kubelet/pods/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2/volumes" Feb 18 19:47:25 crc kubenswrapper[4942]: I0218 19:47:25.059561 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a19078-b432-452e-8918-7b0f8c60e632" path="/var/lib/kubelet/pods/b4a19078-b432-452e-8918-7b0f8c60e632/volumes" Feb 18 19:47:31 crc kubenswrapper[4942]: I0218 19:47:31.043175 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:47:31 crc kubenswrapper[4942]: E0218 19:47:31.043782 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:47:44 crc kubenswrapper[4942]: 
I0218 19:47:44.036439 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:47:44 crc kubenswrapper[4942]: E0218 19:47:44.037498 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:47:49 crc kubenswrapper[4942]: I0218 19:47:49.137359 4942 scope.go:117] "RemoveContainer" containerID="2d29442d9649dbaa907e5735ea0dda7657607ca6fa24c4f83c7c2be4ce910a11" Feb 18 19:47:49 crc kubenswrapper[4942]: I0218 19:47:49.187466 4942 scope.go:117] "RemoveContainer" containerID="3d586c465df9e16d18d5348d207063c859dc4c0c45589222afa474013bd766c5" Feb 18 19:47:49 crc kubenswrapper[4942]: I0218 19:47:49.243878 4942 scope.go:117] "RemoveContainer" containerID="ebb11ccd20be89bb58e99f7b4e01c65708315c8dea33a27fefa79d1ee13756e9" Feb 18 19:47:50 crc kubenswrapper[4942]: I0218 19:47:50.039537 4942 generic.go:334] "Generic (PLEG): container finished" podID="3aa31c62-3ec4-4764-b3c9-915f2ed0d979" containerID="b8781f6763e3c28d5b28c6dbe67fce458666fad6da6df65d68c5f9c6691cc2d4" exitCode=0 Feb 18 19:47:50 crc kubenswrapper[4942]: I0218 19:47:50.039579 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" event={"ID":"3aa31c62-3ec4-4764-b3c9-915f2ed0d979","Type":"ContainerDied","Data":"b8781f6763e3c28d5b28c6dbe67fce458666fad6da6df65d68c5f9c6691cc2d4"} Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.499219 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.621568 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb9js\" (UniqueName: \"kubernetes.io/projected/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-kube-api-access-zb9js\") pod \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.621865 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-ssh-key-openstack-edpm-ipam\") pod \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.621889 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-inventory\") pod \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.627971 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-kube-api-access-zb9js" (OuterVolumeSpecName: "kube-api-access-zb9js") pod "3aa31c62-3ec4-4764-b3c9-915f2ed0d979" (UID: "3aa31c62-3ec4-4764-b3c9-915f2ed0d979"). InnerVolumeSpecName "kube-api-access-zb9js". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.655297 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-inventory" (OuterVolumeSpecName: "inventory") pod "3aa31c62-3ec4-4764-b3c9-915f2ed0d979" (UID: "3aa31c62-3ec4-4764-b3c9-915f2ed0d979"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.655830 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3aa31c62-3ec4-4764-b3c9-915f2ed0d979" (UID: "3aa31c62-3ec4-4764-b3c9-915f2ed0d979"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.724871 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb9js\" (UniqueName: \"kubernetes.io/projected/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-kube-api-access-zb9js\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.724938 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.724965 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.067565 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" event={"ID":"3aa31c62-3ec4-4764-b3c9-915f2ed0d979","Type":"ContainerDied","Data":"40b6d04180d9292202facb30248186130f9617cf1e6b90655f07b0a1a6ace2e6"} Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.067626 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40b6d04180d9292202facb30248186130f9617cf1e6b90655f07b0a1a6ace2e6" Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 
19:47:52.067661 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.164050 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qlvtb"]
Feb 18 19:47:52 crc kubenswrapper[4942]: E0218 19:47:52.164725 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa31c62-3ec4-4764-b3c9-915f2ed0d979" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.164755 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa31c62-3ec4-4764-b3c9-915f2ed0d979" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.165234 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa31c62-3ec4-4764-b3c9-915f2ed0d979" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.166321 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.168165 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.168905 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.169296 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.177621 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qlvtb"]
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.182386 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.337772 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.337939 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xdzg\" (UniqueName: \"kubernetes.io/projected/746ae939-383d-48e0-98ab-12f13962d6d3-kube-api-access-4xdzg\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.338270 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.440717 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.440842 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xdzg\" (UniqueName: \"kubernetes.io/projected/746ae939-383d-48e0-98ab-12f13962d6d3-kube-api-access-4xdzg\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.440966 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.447899 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.447986 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.461273 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xdzg\" (UniqueName: \"kubernetes.io/projected/746ae939-383d-48e0-98ab-12f13962d6d3-kube-api-access-4xdzg\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.486601 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:53 crc kubenswrapper[4942]: I0218 19:47:53.103128 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qlvtb"]
Feb 18 19:47:53 crc kubenswrapper[4942]: I0218 19:47:53.115742 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 18 19:47:54 crc kubenswrapper[4942]: I0218 19:47:54.089308 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb" event={"ID":"746ae939-383d-48e0-98ab-12f13962d6d3","Type":"ContainerStarted","Data":"4503660b2cafb14bcddd36eacccdf5e5fdbd3d637549e419febdc58939a91d5c"}
Feb 18 19:47:54 crc kubenswrapper[4942]: I0218 19:47:54.089667 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb" event={"ID":"746ae939-383d-48e0-98ab-12f13962d6d3","Type":"ContainerStarted","Data":"083bd6858b06e05ab72bfbfa1958a46743bdb5d0a05522ffbba47f00bc6504b2"}
Feb 18 19:47:54 crc kubenswrapper[4942]: I0218 19:47:54.120013 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb" podStartSLOduration=1.65213203 podStartE2EDuration="2.119983422s" podCreationTimestamp="2026-02-18 19:47:52 +0000 UTC" firstStartedPulling="2026-02-18 19:47:53.115484445 +0000 UTC m=+1832.820417110" lastFinishedPulling="2026-02-18 19:47:53.583335797 +0000 UTC m=+1833.288268502" observedRunningTime="2026-02-18 19:47:54.106160988 +0000 UTC m=+1833.811093693" watchObservedRunningTime="2026-02-18 19:47:54.119983422 +0000 UTC m=+1833.824916117"
Feb 18 19:47:57 crc kubenswrapper[4942]: I0218 19:47:57.036622 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"
Feb 18 19:47:57 crc kubenswrapper[4942]: E0218 19:47:57.037568 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 19:48:01 crc kubenswrapper[4942]: I0218 19:48:01.177996 4942 generic.go:334] "Generic (PLEG): container finished" podID="746ae939-383d-48e0-98ab-12f13962d6d3" containerID="4503660b2cafb14bcddd36eacccdf5e5fdbd3d637549e419febdc58939a91d5c" exitCode=0
Feb 18 19:48:01 crc kubenswrapper[4942]: I0218 19:48:01.178069 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb" event={"ID":"746ae939-383d-48e0-98ab-12f13962d6d3","Type":"ContainerDied","Data":"4503660b2cafb14bcddd36eacccdf5e5fdbd3d637549e419febdc58939a91d5c"}
Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.691504 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.780535 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-ssh-key-openstack-edpm-ipam\") pod \"746ae939-383d-48e0-98ab-12f13962d6d3\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") "
Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.780730 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-inventory-0\") pod \"746ae939-383d-48e0-98ab-12f13962d6d3\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") "
Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.780888 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xdzg\" (UniqueName: \"kubernetes.io/projected/746ae939-383d-48e0-98ab-12f13962d6d3-kube-api-access-4xdzg\") pod \"746ae939-383d-48e0-98ab-12f13962d6d3\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") "
Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.787077 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/746ae939-383d-48e0-98ab-12f13962d6d3-kube-api-access-4xdzg" (OuterVolumeSpecName: "kube-api-access-4xdzg") pod "746ae939-383d-48e0-98ab-12f13962d6d3" (UID: "746ae939-383d-48e0-98ab-12f13962d6d3"). InnerVolumeSpecName "kube-api-access-4xdzg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.810130 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "746ae939-383d-48e0-98ab-12f13962d6d3" (UID: "746ae939-383d-48e0-98ab-12f13962d6d3"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.811932 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "746ae939-383d-48e0-98ab-12f13962d6d3" (UID: "746ae939-383d-48e0-98ab-12f13962d6d3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.883518 4942 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-inventory-0\") on node \"crc\" DevicePath \"\""
Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.883569 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xdzg\" (UniqueName: \"kubernetes.io/projected/746ae939-383d-48e0-98ab-12f13962d6d3-kube-api-access-4xdzg\") on node \"crc\" DevicePath \"\""
Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.883593 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.208431 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb" event={"ID":"746ae939-383d-48e0-98ab-12f13962d6d3","Type":"ContainerDied","Data":"083bd6858b06e05ab72bfbfa1958a46743bdb5d0a05522ffbba47f00bc6504b2"}
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.208493 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="083bd6858b06e05ab72bfbfa1958a46743bdb5d0a05522ffbba47f00bc6504b2"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.208585 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.323596 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"]
Feb 18 19:48:03 crc kubenswrapper[4942]: E0218 19:48:03.324127 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746ae939-383d-48e0-98ab-12f13962d6d3" containerName="ssh-known-hosts-edpm-deployment"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.324152 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="746ae939-383d-48e0-98ab-12f13962d6d3" containerName="ssh-known-hosts-edpm-deployment"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.324449 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="746ae939-383d-48e0-98ab-12f13962d6d3" containerName="ssh-known-hosts-edpm-deployment"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.325300 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.327861 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.328025 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.328421 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.332719 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.338740 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"]
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.498047 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.498206 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.498658 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56548\" (UniqueName: \"kubernetes.io/projected/90bc7193-8433-4354-99c8-e441b477670b-kube-api-access-56548\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.601124 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56548\" (UniqueName: \"kubernetes.io/projected/90bc7193-8433-4354-99c8-e441b477670b-kube-api-access-56548\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.601324 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.601419 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.606972 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.618393 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.621485 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56548\" (UniqueName: \"kubernetes.io/projected/90bc7193-8433-4354-99c8-e441b477670b-kube-api-access-56548\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"
Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.648934 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"
Feb 18 19:48:04 crc kubenswrapper[4942]: I0218 19:48:04.170884 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"]
Feb 18 19:48:04 crc kubenswrapper[4942]: I0218 19:48:04.217989 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" event={"ID":"90bc7193-8433-4354-99c8-e441b477670b","Type":"ContainerStarted","Data":"832e643169e6918b6869e534b927c648a3ced25a15e292d62f10bbfb1e11f139"}
Feb 18 19:48:05 crc kubenswrapper[4942]: I0218 19:48:05.231450 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" event={"ID":"90bc7193-8433-4354-99c8-e441b477670b","Type":"ContainerStarted","Data":"c0caeade510de70c11ac75f5bb95ac1d1b35661f2a2e4df9f57a3d094eacf300"}
Feb 18 19:48:05 crc kubenswrapper[4942]: I0218 19:48:05.265243 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" podStartSLOduration=1.838542063 podStartE2EDuration="2.265220549s" podCreationTimestamp="2026-02-18 19:48:03 +0000 UTC" firstStartedPulling="2026-02-18 19:48:04.169995601 +0000 UTC m=+1843.874928286" lastFinishedPulling="2026-02-18 19:48:04.596674107 +0000 UTC m=+1844.301606772" observedRunningTime="2026-02-18 19:48:05.252986337 +0000 UTC m=+1844.957919032" watchObservedRunningTime="2026-02-18 19:48:05.265220549 +0000 UTC m=+1844.970153224"
Feb 18 19:48:09 crc kubenswrapper[4942]: I0218 19:48:09.036152 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"
Feb 18 19:48:09 crc kubenswrapper[4942]: E0218 19:48:09.037184 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 19:48:09 crc kubenswrapper[4942]: I0218 19:48:09.064691 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-6sjb6"]
Feb 18 19:48:09 crc kubenswrapper[4942]: I0218 19:48:09.079363 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-6sjb6"]
Feb 18 19:48:11 crc kubenswrapper[4942]: I0218 19:48:11.049926 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c972a02-9d35-43d1-9ef6-ab99f7cded50" path="/var/lib/kubelet/pods/2c972a02-9d35-43d1-9ef6-ab99f7cded50/volumes"
Feb 18 19:48:13 crc kubenswrapper[4942]: I0218 19:48:13.312740 4942 generic.go:334] "Generic (PLEG): container finished" podID="90bc7193-8433-4354-99c8-e441b477670b" containerID="c0caeade510de70c11ac75f5bb95ac1d1b35661f2a2e4df9f57a3d094eacf300" exitCode=0
Feb 18 19:48:13 crc kubenswrapper[4942]: I0218 19:48:13.312857 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" event={"ID":"90bc7193-8433-4354-99c8-e441b477670b","Type":"ContainerDied","Data":"c0caeade510de70c11ac75f5bb95ac1d1b35661f2a2e4df9f57a3d094eacf300"}
Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.760964 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"
Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.855430 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-inventory\") pod \"90bc7193-8433-4354-99c8-e441b477670b\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") "
Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.855514 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56548\" (UniqueName: \"kubernetes.io/projected/90bc7193-8433-4354-99c8-e441b477670b-kube-api-access-56548\") pod \"90bc7193-8433-4354-99c8-e441b477670b\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") "
Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.855544 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-ssh-key-openstack-edpm-ipam\") pod \"90bc7193-8433-4354-99c8-e441b477670b\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") "
Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.860956 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90bc7193-8433-4354-99c8-e441b477670b-kube-api-access-56548" (OuterVolumeSpecName: "kube-api-access-56548") pod "90bc7193-8433-4354-99c8-e441b477670b" (UID: "90bc7193-8433-4354-99c8-e441b477670b"). InnerVolumeSpecName "kube-api-access-56548". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.888733 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "90bc7193-8433-4354-99c8-e441b477670b" (UID: "90bc7193-8433-4354-99c8-e441b477670b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.891342 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-inventory" (OuterVolumeSpecName: "inventory") pod "90bc7193-8433-4354-99c8-e441b477670b" (UID: "90bc7193-8433-4354-99c8-e441b477670b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.960869 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.960918 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56548\" (UniqueName: \"kubernetes.io/projected/90bc7193-8433-4354-99c8-e441b477670b-kube-api-access-56548\") on node \"crc\" DevicePath \"\""
Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.960968 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.337258 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" event={"ID":"90bc7193-8433-4354-99c8-e441b477670b","Type":"ContainerDied","Data":"832e643169e6918b6869e534b927c648a3ced25a15e292d62f10bbfb1e11f139"}
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.337622 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="832e643169e6918b6869e534b927c648a3ced25a15e292d62f10bbfb1e11f139"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.337335 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.445153 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"]
Feb 18 19:48:15 crc kubenswrapper[4942]: E0218 19:48:15.445617 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bc7193-8433-4354-99c8-e441b477670b" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.445640 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bc7193-8433-4354-99c8-e441b477670b" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.445911 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bc7193-8433-4354-99c8-e441b477670b" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.446686 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.449921 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.450648 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.454401 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.455249 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.462943 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"]
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.586086 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.586348 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.587016 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js2pn\" (UniqueName: \"kubernetes.io/projected/79e81623-b595-4683-81b3-89c5a11f8237-kube-api-access-js2pn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.688964 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js2pn\" (UniqueName: \"kubernetes.io/projected/79e81623-b595-4683-81b3-89c5a11f8237-kube-api-access-js2pn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.689050 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.689082 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.693304 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.696299 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.719718 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js2pn\" (UniqueName: \"kubernetes.io/projected/79e81623-b595-4683-81b3-89c5a11f8237-kube-api-access-js2pn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"
Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.775348 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"
Feb 18 19:48:16 crc kubenswrapper[4942]: I0218 19:48:16.177442 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"]
Feb 18 19:48:16 crc kubenswrapper[4942]: W0218 19:48:16.183033 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79e81623_b595_4683_81b3_89c5a11f8237.slice/crio-0dd19f57f7767a16796e0297a7dd9ca4c53eec6f04f4a54a1a4865683242c6a8 WatchSource:0}: Error finding container 0dd19f57f7767a16796e0297a7dd9ca4c53eec6f04f4a54a1a4865683242c6a8: Status 404 returned error can't find the container with id 0dd19f57f7767a16796e0297a7dd9ca4c53eec6f04f4a54a1a4865683242c6a8
Feb 18 19:48:16 crc kubenswrapper[4942]: I0218 19:48:16.345410 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" event={"ID":"79e81623-b595-4683-81b3-89c5a11f8237","Type":"ContainerStarted","Data":"0dd19f57f7767a16796e0297a7dd9ca4c53eec6f04f4a54a1a4865683242c6a8"}
Feb 18 19:48:17 crc kubenswrapper[4942]: I0218 19:48:17.360138 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" event={"ID":"79e81623-b595-4683-81b3-89c5a11f8237","Type":"ContainerStarted","Data":"be278f8aa5596652dd2d6708280f59cf7d32aa410dae321c030e034b045c3fe3"}
Feb 18 19:48:22 crc kubenswrapper[4942]: I0218 19:48:22.035895 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"
Feb 18 19:48:22 crc kubenswrapper[4942]: E0218 19:48:22.036992 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 19:48:26 crc kubenswrapper[4942]: I0218 19:48:26.469694 4942 generic.go:334] "Generic (PLEG): container finished" podID="79e81623-b595-4683-81b3-89c5a11f8237" containerID="be278f8aa5596652dd2d6708280f59cf7d32aa410dae321c030e034b045c3fe3" exitCode=0
Feb 18 19:48:26 crc kubenswrapper[4942]: I0218 19:48:26.469796 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" event={"ID":"79e81623-b595-4683-81b3-89c5a11f8237","Type":"ContainerDied","Data":"be278f8aa5596652dd2d6708280f59cf7d32aa410dae321c030e034b045c3fe3"}
Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.058651 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"
Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.203835 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-ssh-key-openstack-edpm-ipam\") pod \"79e81623-b595-4683-81b3-89c5a11f8237\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") "
Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.204244 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js2pn\" (UniqueName: \"kubernetes.io/projected/79e81623-b595-4683-81b3-89c5a11f8237-kube-api-access-js2pn\") pod \"79e81623-b595-4683-81b3-89c5a11f8237\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") "
Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.204468 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-inventory\") pod \"79e81623-b595-4683-81b3-89c5a11f8237\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") "
Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.221682 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e81623-b595-4683-81b3-89c5a11f8237-kube-api-access-js2pn" (OuterVolumeSpecName: "kube-api-access-js2pn") pod "79e81623-b595-4683-81b3-89c5a11f8237" (UID: "79e81623-b595-4683-81b3-89c5a11f8237"). InnerVolumeSpecName "kube-api-access-js2pn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.240030 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "79e81623-b595-4683-81b3-89c5a11f8237" (UID: "79e81623-b595-4683-81b3-89c5a11f8237"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.251530 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-inventory" (OuterVolumeSpecName: "inventory") pod "79e81623-b595-4683-81b3-89c5a11f8237" (UID: "79e81623-b595-4683-81b3-89c5a11f8237"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.307972 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.308007 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js2pn\" (UniqueName: \"kubernetes.io/projected/79e81623-b595-4683-81b3-89c5a11f8237-kube-api-access-js2pn\") on node \"crc\" DevicePath \"\""
Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.308019 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.514080 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" event={"ID":"79e81623-b595-4683-81b3-89c5a11f8237","Type":"ContainerDied","Data":"0dd19f57f7767a16796e0297a7dd9ca4c53eec6f04f4a54a1a4865683242c6a8"}
Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.514150 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dd19f57f7767a16796e0297a7dd9ca4c53eec6f04f4a54a1a4865683242c6a8"
Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.514173 4942 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.628170 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9"] Feb 18 19:48:28 crc kubenswrapper[4942]: E0218 19:48:28.628946 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e81623-b595-4683-81b3-89c5a11f8237" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.629035 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e81623-b595-4683-81b3-89c5a11f8237" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.629333 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e81623-b595-4683-81b3-89c5a11f8237" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.630265 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.635079 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.641654 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.641937 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.642388 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.642602 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.642840 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.643742 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.644127 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.661929 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9"] Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.820958 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821040 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821071 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821092 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821156 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821196 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821216 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821251 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821273 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821290 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8gnl\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-kube-api-access-z8gnl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821306 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821423 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821455 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821511 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923061 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923115 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923151 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923191 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923263 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923300 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923328 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: 
\"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923380 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923435 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923464 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923514 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc 
kubenswrapper[4942]: I0218 19:48:28.923545 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923571 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923590 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8gnl\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-kube-api-access-z8gnl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.927359 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.928203 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.928978 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.929879 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.929980 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.930342 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.931491 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.931575 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.931602 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.932069 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.934314 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.934632 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.935175 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.943289 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8gnl\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-kube-api-access-z8gnl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.957172 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:29 crc kubenswrapper[4942]: I0218 19:48:29.563027 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9"] Feb 18 19:48:30 crc kubenswrapper[4942]: I0218 19:48:30.540959 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" event={"ID":"3dd9d927-61b8-4c83-93f9-131ab03cb0cc","Type":"ContainerStarted","Data":"3791c445a8aed4a1c329e58e05be1af60aa43c45c0de7b755485eb69e49cd109"} Feb 18 19:48:30 crc kubenswrapper[4942]: I0218 19:48:30.541324 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" event={"ID":"3dd9d927-61b8-4c83-93f9-131ab03cb0cc","Type":"ContainerStarted","Data":"8eef9dd84c19f1e2b5d433ec3f8f7e827b07e842d030159eea65323eacc71c4a"} Feb 18 19:48:30 crc kubenswrapper[4942]: I0218 19:48:30.576709 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" podStartSLOduration=2.081833057 podStartE2EDuration="2.57668729s" podCreationTimestamp="2026-02-18 19:48:28 +0000 UTC" firstStartedPulling="2026-02-18 19:48:29.567802468 +0000 UTC m=+1869.272735143" lastFinishedPulling="2026-02-18 19:48:30.062656691 +0000 UTC m=+1869.767589376" observedRunningTime="2026-02-18 19:48:30.56227007 +0000 UTC m=+1870.267202755" watchObservedRunningTime="2026-02-18 19:48:30.57668729 +0000 UTC m=+1870.281619965" Feb 18 19:48:37 crc kubenswrapper[4942]: I0218 19:48:37.037437 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 
19:48:37 crc kubenswrapper[4942]: E0218 19:48:37.038397 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:48:49 crc kubenswrapper[4942]: I0218 19:48:49.358391 4942 scope.go:117] "RemoveContainer" containerID="493fbf668fd581eae9f157a3d4dd7cefc935750aeaa50d79a8dc2cadd67f3413" Feb 18 19:48:52 crc kubenswrapper[4942]: I0218 19:48:52.035789 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:48:52 crc kubenswrapper[4942]: E0218 19:48:52.036362 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:49:04 crc kubenswrapper[4942]: I0218 19:49:04.036120 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:49:04 crc kubenswrapper[4942]: E0218 19:49:04.036818 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" 
podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:49:07 crc kubenswrapper[4942]: I0218 19:49:07.937726 4942 generic.go:334] "Generic (PLEG): container finished" podID="3dd9d927-61b8-4c83-93f9-131ab03cb0cc" containerID="3791c445a8aed4a1c329e58e05be1af60aa43c45c0de7b755485eb69e49cd109" exitCode=0 Feb 18 19:49:07 crc kubenswrapper[4942]: I0218 19:49:07.937843 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" event={"ID":"3dd9d927-61b8-4c83-93f9-131ab03cb0cc","Type":"ContainerDied","Data":"3791c445a8aed4a1c329e58e05be1af60aa43c45c0de7b755485eb69e49cd109"} Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.493275 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621177 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621242 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-telemetry-combined-ca-bundle\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621301 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-libvirt-combined-ca-bundle\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: 
\"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621349 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ovn-combined-ca-bundle\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621432 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ssh-key-openstack-edpm-ipam\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621477 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-repo-setup-combined-ca-bundle\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621519 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-inventory\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621621 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 
19:49:09.621800 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-neutron-metadata-combined-ca-bundle\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621840 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-nova-combined-ca-bundle\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621939 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621987 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8gnl\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-kube-api-access-z8gnl\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.622038 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.622079 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-bootstrap-combined-ca-bundle\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.628099 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.628236 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.630839 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.631739 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.631902 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.632992 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.633080 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-kube-api-access-z8gnl" (OuterVolumeSpecName: "kube-api-access-z8gnl") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "kube-api-access-z8gnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.634038 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.634146 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.636820 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.644875 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.644962 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.662907 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-inventory" (OuterVolumeSpecName: "inventory") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.679834 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726278 4942 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726327 4942 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726350 4942 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726374 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8gnl\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-kube-api-access-z8gnl\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726394 4942 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726413 4942 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726433 4942 reconciler_common.go:293] "Volume detached for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726452 4942 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726470 4942 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726492 4942 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726510 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726528 4942 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726548 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726569 4942 
reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.962009 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" event={"ID":"3dd9d927-61b8-4c83-93f9-131ab03cb0cc","Type":"ContainerDied","Data":"8eef9dd84c19f1e2b5d433ec3f8f7e827b07e842d030159eea65323eacc71c4a"} Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.962088 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eef9dd84c19f1e2b5d433ec3f8f7e827b07e842d030159eea65323eacc71c4a" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.962188 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.106317 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b"] Feb 18 19:49:10 crc kubenswrapper[4942]: E0218 19:49:10.107020 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dd9d927-61b8-4c83-93f9-131ab03cb0cc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.107052 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dd9d927-61b8-4c83-93f9-131ab03cb0cc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.107412 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dd9d927-61b8-4c83-93f9-131ab03cb0cc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.108539 4942 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.113383 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.113691 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.114215 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.114404 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.114740 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.126025 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b"] Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.235988 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6d57\" (UniqueName: \"kubernetes.io/projected/bae512dc-7305-4dc5-b47a-524c9b8f57ab-kube-api-access-w6d57\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.236373 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovncontroller-config-0\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.236529 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.236678 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.236739 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.338507 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6d57\" (UniqueName: \"kubernetes.io/projected/bae512dc-7305-4dc5-b47a-524c9b8f57ab-kube-api-access-w6d57\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 
19:49:10.338561 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.338642 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.338698 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.338734 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.339833 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovncontroller-config-0\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.346996 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.349314 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.352431 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.362952 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6d57\" (UniqueName: \"kubernetes.io/projected/bae512dc-7305-4dc5-b47a-524c9b8f57ab-kube-api-access-w6d57\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.431302 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.996294 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b"] Feb 18 19:49:11 crc kubenswrapper[4942]: I0218 19:49:11.982636 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" event={"ID":"bae512dc-7305-4dc5-b47a-524c9b8f57ab","Type":"ContainerStarted","Data":"9f90e2546d8c227bc117262fd71f9c8456682f751fed957e9711a9d5bf183e6f"} Feb 18 19:49:11 crc kubenswrapper[4942]: I0218 19:49:11.983027 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" event={"ID":"bae512dc-7305-4dc5-b47a-524c9b8f57ab","Type":"ContainerStarted","Data":"5352973215c03a80c13a0dd4da3a49efd6b06a0bf8db37bee68ea4a246523de3"} Feb 18 19:49:12 crc kubenswrapper[4942]: I0218 19:49:12.000896 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" podStartSLOduration=1.579119315 podStartE2EDuration="2.000873352s" podCreationTimestamp="2026-02-18 19:49:10 +0000 UTC" firstStartedPulling="2026-02-18 19:49:11.000438282 +0000 UTC m=+1910.705370957" lastFinishedPulling="2026-02-18 19:49:11.422192289 +0000 UTC m=+1911.127124994" observedRunningTime="2026-02-18 19:49:11.999620689 +0000 UTC m=+1911.704553364" watchObservedRunningTime="2026-02-18 19:49:12.000873352 +0000 UTC m=+1911.705806067" Feb 18 19:49:15 crc kubenswrapper[4942]: I0218 19:49:15.035578 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:49:15 crc kubenswrapper[4942]: E0218 19:49:15.036196 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:49:29 crc kubenswrapper[4942]: I0218 19:49:29.036085 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:49:29 crc kubenswrapper[4942]: E0218 19:49:29.036802 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:49:40 crc kubenswrapper[4942]: I0218 19:49:40.037060 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:49:40 crc kubenswrapper[4942]: E0218 19:49:40.038226 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:49:53 crc kubenswrapper[4942]: I0218 19:49:53.037477 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:49:53 crc kubenswrapper[4942]: E0218 19:49:53.040193 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:50:07 crc kubenswrapper[4942]: I0218 19:50:07.036963 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:50:07 crc kubenswrapper[4942]: E0218 19:50:07.037628 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:50:16 crc kubenswrapper[4942]: I0218 19:50:16.826108 4942 generic.go:334] "Generic (PLEG): container finished" podID="bae512dc-7305-4dc5-b47a-524c9b8f57ab" containerID="9f90e2546d8c227bc117262fd71f9c8456682f751fed957e9711a9d5bf183e6f" exitCode=0 Feb 18 19:50:16 crc kubenswrapper[4942]: I0218 19:50:16.826196 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" event={"ID":"bae512dc-7305-4dc5-b47a-524c9b8f57ab","Type":"ContainerDied","Data":"9f90e2546d8c227bc117262fd71f9c8456682f751fed957e9711a9d5bf183e6f"} Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.278455 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.390616 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ssh-key-openstack-edpm-ipam\") pod \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.390718 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-inventory\") pod \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.390827 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovncontroller-config-0\") pod \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.390885 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovn-combined-ca-bundle\") pod \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.391033 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6d57\" (UniqueName: \"kubernetes.io/projected/bae512dc-7305-4dc5-b47a-524c9b8f57ab-kube-api-access-w6d57\") pod \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.396307 4942 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "bae512dc-7305-4dc5-b47a-524c9b8f57ab" (UID: "bae512dc-7305-4dc5-b47a-524c9b8f57ab"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.407061 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae512dc-7305-4dc5-b47a-524c9b8f57ab-kube-api-access-w6d57" (OuterVolumeSpecName: "kube-api-access-w6d57") pod "bae512dc-7305-4dc5-b47a-524c9b8f57ab" (UID: "bae512dc-7305-4dc5-b47a-524c9b8f57ab"). InnerVolumeSpecName "kube-api-access-w6d57". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.419516 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bae512dc-7305-4dc5-b47a-524c9b8f57ab" (UID: "bae512dc-7305-4dc5-b47a-524c9b8f57ab"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.429235 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "bae512dc-7305-4dc5-b47a-524c9b8f57ab" (UID: "bae512dc-7305-4dc5-b47a-524c9b8f57ab"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.443854 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-inventory" (OuterVolumeSpecName: "inventory") pod "bae512dc-7305-4dc5-b47a-524c9b8f57ab" (UID: "bae512dc-7305-4dc5-b47a-524c9b8f57ab"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.493205 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6d57\" (UniqueName: \"kubernetes.io/projected/bae512dc-7305-4dc5-b47a-524c9b8f57ab-kube-api-access-w6d57\") on node \"crc\" DevicePath \"\"" Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.493251 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.493261 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.493269 4942 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.493280 4942 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.843943 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" event={"ID":"bae512dc-7305-4dc5-b47a-524c9b8f57ab","Type":"ContainerDied","Data":"5352973215c03a80c13a0dd4da3a49efd6b06a0bf8db37bee68ea4a246523de3"} Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.844476 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5352973215c03a80c13a0dd4da3a49efd6b06a0bf8db37bee68ea4a246523de3" Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.844062 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.021169 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"] Feb 18 19:50:19 crc kubenswrapper[4942]: E0218 19:50:19.021753 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae512dc-7305-4dc5-b47a-524c9b8f57ab" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.021874 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae512dc-7305-4dc5-b47a-524c9b8f57ab" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.022183 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="bae512dc-7305-4dc5-b47a-524c9b8f57ab" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.022940 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.025194 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.025559 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.025597 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.025914 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.029029 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.034198 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.049096 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"] Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.103725 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.103813 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.103886 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.103961 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrwfm\" (UniqueName: \"kubernetes.io/projected/7ae4842a-dc23-4e56-a33d-87df95cade92-kube-api-access-jrwfm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.104072 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.104130 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.209667 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.209747 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.209824 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.209854 4942 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-jrwfm\" (UniqueName: \"kubernetes.io/projected/7ae4842a-dc23-4e56-a33d-87df95cade92-kube-api-access-jrwfm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.209929 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.209986 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.214862 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.214861 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.215559 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.215664 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.220056 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.245542 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrwfm\" (UniqueName: \"kubernetes.io/projected/7ae4842a-dc23-4e56-a33d-87df95cade92-kube-api-access-jrwfm\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.341356 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.902242 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"] Feb 18 19:50:20 crc kubenswrapper[4942]: I0218 19:50:20.868752 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" event={"ID":"7ae4842a-dc23-4e56-a33d-87df95cade92","Type":"ContainerStarted","Data":"261d9e0939cda0fb87a9e22441ae81a2f95c444b1440ca9d5683679ec038b2ae"} Feb 18 19:50:21 crc kubenswrapper[4942]: I0218 19:50:21.051235 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:50:21 crc kubenswrapper[4942]: E0218 19:50:21.052045 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:50:21 crc kubenswrapper[4942]: I0218 19:50:21.882035 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" event={"ID":"7ae4842a-dc23-4e56-a33d-87df95cade92","Type":"ContainerStarted","Data":"75804daf0f8a67a86a9c2a3e7a0911dc2ff820a2e9d5fb6f79a4bfa2b98f6abe"} Feb 18 19:50:21 crc kubenswrapper[4942]: 
I0218 19:50:21.924932 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" podStartSLOduration=2.209477899 podStartE2EDuration="2.924909717s" podCreationTimestamp="2026-02-18 19:50:19 +0000 UTC" firstStartedPulling="2026-02-18 19:50:19.913007196 +0000 UTC m=+1979.617939861" lastFinishedPulling="2026-02-18 19:50:20.628438994 +0000 UTC m=+1980.333371679" observedRunningTime="2026-02-18 19:50:21.909039419 +0000 UTC m=+1981.613972104" watchObservedRunningTime="2026-02-18 19:50:21.924909717 +0000 UTC m=+1981.629842392" Feb 18 19:50:32 crc kubenswrapper[4942]: I0218 19:50:32.036202 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:50:33 crc kubenswrapper[4942]: I0218 19:50:33.006962 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"c81e1d649813c8beecb89429c1c4dde799b86b0af5d8804642a6a83d2ee52071"} Feb 18 19:51:11 crc kubenswrapper[4942]: I0218 19:51:11.395366 4942 generic.go:334] "Generic (PLEG): container finished" podID="7ae4842a-dc23-4e56-a33d-87df95cade92" containerID="75804daf0f8a67a86a9c2a3e7a0911dc2ff820a2e9d5fb6f79a4bfa2b98f6abe" exitCode=0 Feb 18 19:51:11 crc kubenswrapper[4942]: I0218 19:51:11.395434 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" event={"ID":"7ae4842a-dc23-4e56-a33d-87df95cade92","Type":"ContainerDied","Data":"75804daf0f8a67a86a9c2a3e7a0911dc2ff820a2e9d5fb6f79a4bfa2b98f6abe"} Feb 18 19:51:12 crc kubenswrapper[4942]: I0218 19:51:12.950399 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.065381 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrwfm\" (UniqueName: \"kubernetes.io/projected/7ae4842a-dc23-4e56-a33d-87df95cade92-kube-api-access-jrwfm\") pod \"7ae4842a-dc23-4e56-a33d-87df95cade92\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.065614 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-nova-metadata-neutron-config-0\") pod \"7ae4842a-dc23-4e56-a33d-87df95cade92\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.065696 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-metadata-combined-ca-bundle\") pod \"7ae4842a-dc23-4e56-a33d-87df95cade92\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.065841 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-ssh-key-openstack-edpm-ipam\") pod \"7ae4842a-dc23-4e56-a33d-87df95cade92\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.065887 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7ae4842a-dc23-4e56-a33d-87df95cade92\" (UID: 
\"7ae4842a-dc23-4e56-a33d-87df95cade92\") " Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.066040 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-inventory\") pod \"7ae4842a-dc23-4e56-a33d-87df95cade92\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.075265 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7ae4842a-dc23-4e56-a33d-87df95cade92" (UID: "7ae4842a-dc23-4e56-a33d-87df95cade92"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.075577 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae4842a-dc23-4e56-a33d-87df95cade92-kube-api-access-jrwfm" (OuterVolumeSpecName: "kube-api-access-jrwfm") pod "7ae4842a-dc23-4e56-a33d-87df95cade92" (UID: "7ae4842a-dc23-4e56-a33d-87df95cade92"). InnerVolumeSpecName "kube-api-access-jrwfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.096510 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-inventory" (OuterVolumeSpecName: "inventory") pod "7ae4842a-dc23-4e56-a33d-87df95cade92" (UID: "7ae4842a-dc23-4e56-a33d-87df95cade92"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.102987 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7ae4842a-dc23-4e56-a33d-87df95cade92" (UID: "7ae4842a-dc23-4e56-a33d-87df95cade92"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.106885 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7ae4842a-dc23-4e56-a33d-87df95cade92" (UID: "7ae4842a-dc23-4e56-a33d-87df95cade92"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.110467 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7ae4842a-dc23-4e56-a33d-87df95cade92" (UID: "7ae4842a-dc23-4e56-a33d-87df95cade92"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.183118 4942 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.183172 4942 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.183192 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.183212 4942 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.183234 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.183251 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrwfm\" (UniqueName: \"kubernetes.io/projected/7ae4842a-dc23-4e56-a33d-87df95cade92-kube-api-access-jrwfm\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.415441 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" event={"ID":"7ae4842a-dc23-4e56-a33d-87df95cade92","Type":"ContainerDied","Data":"261d9e0939cda0fb87a9e22441ae81a2f95c444b1440ca9d5683679ec038b2ae"} Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.415491 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="261d9e0939cda0fb87a9e22441ae81a2f95c444b1440ca9d5683679ec038b2ae" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.415517 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.527308 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"] Feb 18 19:51:13 crc kubenswrapper[4942]: E0218 19:51:13.528015 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae4842a-dc23-4e56-a33d-87df95cade92" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.528037 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae4842a-dc23-4e56-a33d-87df95cade92" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.528270 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae4842a-dc23-4e56-a33d-87df95cade92" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.529660 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.531876 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.533019 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.533080 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.533546 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.537563 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.545862 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"] Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.690870 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.690958 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llwjc\" (UniqueName: \"kubernetes.io/projected/1924338e-aea6-474f-9216-bb7eb32dc5fe-kube-api-access-llwjc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: 
\"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.691049 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.691098 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.691326 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.792908 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.793117 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.793197 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llwjc\" (UniqueName: \"kubernetes.io/projected/1924338e-aea6-474f-9216-bb7eb32dc5fe-kube-api-access-llwjc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.793257 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.793305 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.798256 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" 
(UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.798800 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.799212 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.799746 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.817258 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llwjc\" (UniqueName: \"kubernetes.io/projected/1924338e-aea6-474f-9216-bb7eb32dc5fe-kube-api-access-llwjc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.864187 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:51:14 crc kubenswrapper[4942]: I0218 19:51:14.482558 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"] Feb 18 19:51:14 crc kubenswrapper[4942]: W0218 19:51:14.487363 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1924338e_aea6_474f_9216_bb7eb32dc5fe.slice/crio-85a49794f2563fabd1097b6b3517a1243e11855d7bbbaa0e4c993f19ad38505f WatchSource:0}: Error finding container 85a49794f2563fabd1097b6b3517a1243e11855d7bbbaa0e4c993f19ad38505f: Status 404 returned error can't find the container with id 85a49794f2563fabd1097b6b3517a1243e11855d7bbbaa0e4c993f19ad38505f Feb 18 19:51:15 crc kubenswrapper[4942]: I0218 19:51:15.441232 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" event={"ID":"1924338e-aea6-474f-9216-bb7eb32dc5fe","Type":"ContainerStarted","Data":"0881a0c3a7d6de31317f11d4bbacc01b597b4d2f4939061d09363608ec65d1f7"} Feb 18 19:51:15 crc kubenswrapper[4942]: I0218 19:51:15.441613 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" event={"ID":"1924338e-aea6-474f-9216-bb7eb32dc5fe","Type":"ContainerStarted","Data":"85a49794f2563fabd1097b6b3517a1243e11855d7bbbaa0e4c993f19ad38505f"} Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.575244 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" podStartSLOduration=54.149974486 podStartE2EDuration="54.575214165s" podCreationTimestamp="2026-02-18 19:51:13 +0000 UTC" firstStartedPulling="2026-02-18 19:51:14.49070449 +0000 UTC m=+2034.195637165" lastFinishedPulling="2026-02-18 19:51:14.915944179 +0000 UTC m=+2034.620876844" 
observedRunningTime="2026-02-18 19:51:15.478119367 +0000 UTC m=+2035.183052062" watchObservedRunningTime="2026-02-18 19:52:07.575214165 +0000 UTC m=+2087.280146870" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.596044 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zf5jr"] Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.599074 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.610380 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zf5jr"] Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.718240 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-utilities\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.718519 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-catalog-content\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.718715 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pd6b\" (UniqueName: \"kubernetes.io/projected/2ba0702b-f077-473e-9df3-2cc59e94d7d9-kube-api-access-8pd6b\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.820587 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-catalog-content\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.820951 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pd6b\" (UniqueName: \"kubernetes.io/projected/2ba0702b-f077-473e-9df3-2cc59e94d7d9-kube-api-access-8pd6b\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.821160 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-utilities\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.821166 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-catalog-content\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.821435 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-utilities\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.849455 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8pd6b\" (UniqueName: \"kubernetes.io/projected/2ba0702b-f077-473e-9df3-2cc59e94d7d9-kube-api-access-8pd6b\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.932746 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:08 crc kubenswrapper[4942]: I0218 19:52:08.460969 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zf5jr"] Feb 18 19:52:09 crc kubenswrapper[4942]: I0218 19:52:09.064914 4942 generic.go:334] "Generic (PLEG): container finished" podID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerID="e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd" exitCode=0 Feb 18 19:52:09 crc kubenswrapper[4942]: I0218 19:52:09.065170 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf5jr" event={"ID":"2ba0702b-f077-473e-9df3-2cc59e94d7d9","Type":"ContainerDied","Data":"e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd"} Feb 18 19:52:09 crc kubenswrapper[4942]: I0218 19:52:09.065193 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf5jr" event={"ID":"2ba0702b-f077-473e-9df3-2cc59e94d7d9","Type":"ContainerStarted","Data":"c6bd6168be7acfd240df4b763697b61db8aab69181f9ea02390aaa8a2d3ef101"} Feb 18 19:52:11 crc kubenswrapper[4942]: I0218 19:52:11.088237 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf5jr" event={"ID":"2ba0702b-f077-473e-9df3-2cc59e94d7d9","Type":"ContainerStarted","Data":"505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03"} Feb 18 19:52:12 crc kubenswrapper[4942]: I0218 19:52:12.101900 4942 generic.go:334] "Generic (PLEG): container finished" 
podID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerID="505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03" exitCode=0 Feb 18 19:52:12 crc kubenswrapper[4942]: I0218 19:52:12.101965 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf5jr" event={"ID":"2ba0702b-f077-473e-9df3-2cc59e94d7d9","Type":"ContainerDied","Data":"505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03"} Feb 18 19:52:13 crc kubenswrapper[4942]: I0218 19:52:13.119712 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf5jr" event={"ID":"2ba0702b-f077-473e-9df3-2cc59e94d7d9","Type":"ContainerStarted","Data":"08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01"} Feb 18 19:52:13 crc kubenswrapper[4942]: I0218 19:52:13.155369 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zf5jr" podStartSLOduration=2.708749995 podStartE2EDuration="6.155348352s" podCreationTimestamp="2026-02-18 19:52:07 +0000 UTC" firstStartedPulling="2026-02-18 19:52:09.078590586 +0000 UTC m=+2088.783523251" lastFinishedPulling="2026-02-18 19:52:12.525188933 +0000 UTC m=+2092.230121608" observedRunningTime="2026-02-18 19:52:13.144387491 +0000 UTC m=+2092.849320196" watchObservedRunningTime="2026-02-18 19:52:13.155348352 +0000 UTC m=+2092.860281027" Feb 18 19:52:17 crc kubenswrapper[4942]: I0218 19:52:17.934000 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:17 crc kubenswrapper[4942]: I0218 19:52:17.934638 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:18 crc kubenswrapper[4942]: I0218 19:52:18.995865 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zf5jr" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" 
containerName="registry-server" probeResult="failure" output=< Feb 18 19:52:18 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 19:52:18 crc kubenswrapper[4942]: > Feb 18 19:52:27 crc kubenswrapper[4942]: I0218 19:52:27.982332 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:28 crc kubenswrapper[4942]: I0218 19:52:28.044211 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:28 crc kubenswrapper[4942]: I0218 19:52:28.250635 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zf5jr"] Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.277394 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zf5jr" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerName="registry-server" containerID="cri-o://08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01" gracePeriod=2 Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.770535 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.779605 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pd6b\" (UniqueName: \"kubernetes.io/projected/2ba0702b-f077-473e-9df3-2cc59e94d7d9-kube-api-access-8pd6b\") pod \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.779658 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-catalog-content\") pod \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.779867 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-utilities\") pod \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.780584 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-utilities" (OuterVolumeSpecName: "utilities") pod "2ba0702b-f077-473e-9df3-2cc59e94d7d9" (UID: "2ba0702b-f077-473e-9df3-2cc59e94d7d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.787998 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba0702b-f077-473e-9df3-2cc59e94d7d9-kube-api-access-8pd6b" (OuterVolumeSpecName: "kube-api-access-8pd6b") pod "2ba0702b-f077-473e-9df3-2cc59e94d7d9" (UID: "2ba0702b-f077-473e-9df3-2cc59e94d7d9"). InnerVolumeSpecName "kube-api-access-8pd6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.881123 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.881150 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pd6b\" (UniqueName: \"kubernetes.io/projected/2ba0702b-f077-473e-9df3-2cc59e94d7d9-kube-api-access-8pd6b\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.905586 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ba0702b-f077-473e-9df3-2cc59e94d7d9" (UID: "2ba0702b-f077-473e-9df3-2cc59e94d7d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.982505 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.295353 4942 generic.go:334] "Generic (PLEG): container finished" podID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerID="08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01" exitCode=0 Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.295424 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf5jr" event={"ID":"2ba0702b-f077-473e-9df3-2cc59e94d7d9","Type":"ContainerDied","Data":"08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01"} Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.295476 4942 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-zf5jr" event={"ID":"2ba0702b-f077-473e-9df3-2cc59e94d7d9","Type":"ContainerDied","Data":"c6bd6168be7acfd240df4b763697b61db8aab69181f9ea02390aaa8a2d3ef101"} Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.295498 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.295513 4942 scope.go:117] "RemoveContainer" containerID="08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.328076 4942 scope.go:117] "RemoveContainer" containerID="505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.347367 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zf5jr"] Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.360535 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zf5jr"] Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.371570 4942 scope.go:117] "RemoveContainer" containerID="e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.420204 4942 scope.go:117] "RemoveContainer" containerID="08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01" Feb 18 19:52:30 crc kubenswrapper[4942]: E0218 19:52:30.420973 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01\": container with ID starting with 08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01 not found: ID does not exist" containerID="08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.421055 4942 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01"} err="failed to get container status \"08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01\": rpc error: code = NotFound desc = could not find container \"08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01\": container with ID starting with 08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01 not found: ID does not exist" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.421095 4942 scope.go:117] "RemoveContainer" containerID="505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03" Feb 18 19:52:30 crc kubenswrapper[4942]: E0218 19:52:30.421679 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03\": container with ID starting with 505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03 not found: ID does not exist" containerID="505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.421706 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03"} err="failed to get container status \"505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03\": rpc error: code = NotFound desc = could not find container \"505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03\": container with ID starting with 505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03 not found: ID does not exist" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.421723 4942 scope.go:117] "RemoveContainer" containerID="e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd" Feb 18 19:52:30 crc kubenswrapper[4942]: E0218 
19:52:30.422158 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd\": container with ID starting with e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd not found: ID does not exist" containerID="e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.422196 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd"} err="failed to get container status \"e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd\": rpc error: code = NotFound desc = could not find container \"e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd\": container with ID starting with e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd not found: ID does not exist" Feb 18 19:52:31 crc kubenswrapper[4942]: I0218 19:52:31.052557 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" path="/var/lib/kubelet/pods/2ba0702b-f077-473e-9df3-2cc59e94d7d9/volumes" Feb 18 19:52:53 crc kubenswrapper[4942]: I0218 19:52:53.740864 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:52:53 crc kubenswrapper[4942]: I0218 19:52:53.741378 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 18 19:53:15 crc kubenswrapper[4942]: I0218 19:53:15.872726 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9qvpq"] Feb 18 19:53:15 crc kubenswrapper[4942]: E0218 19:53:15.873879 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerName="extract-utilities" Feb 18 19:53:15 crc kubenswrapper[4942]: I0218 19:53:15.873898 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerName="extract-utilities" Feb 18 19:53:15 crc kubenswrapper[4942]: E0218 19:53:15.873925 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerName="extract-content" Feb 18 19:53:15 crc kubenswrapper[4942]: I0218 19:53:15.873934 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerName="extract-content" Feb 18 19:53:15 crc kubenswrapper[4942]: E0218 19:53:15.873950 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerName="registry-server" Feb 18 19:53:15 crc kubenswrapper[4942]: I0218 19:53:15.873958 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerName="registry-server" Feb 18 19:53:15 crc kubenswrapper[4942]: I0218 19:53:15.874247 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerName="registry-server" Feb 18 19:53:15 crc kubenswrapper[4942]: I0218 19:53:15.876335 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:15 crc kubenswrapper[4942]: I0218 19:53:15.903098 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9qvpq"] Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.078537 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5lcs\" (UniqueName: \"kubernetes.io/projected/959568c6-1106-46e0-89f4-d10e629dc2be-kube-api-access-j5lcs\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.078605 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-utilities\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.078764 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-catalog-content\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.179903 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-catalog-content\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.180053 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j5lcs\" (UniqueName: \"kubernetes.io/projected/959568c6-1106-46e0-89f4-d10e629dc2be-kube-api-access-j5lcs\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.180120 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-utilities\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.180471 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-catalog-content\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.180502 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-utilities\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.202220 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5lcs\" (UniqueName: \"kubernetes.io/projected/959568c6-1106-46e0-89f4-d10e629dc2be-kube-api-access-j5lcs\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.203764 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.728265 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9qvpq"] Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.791046 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qvpq" event={"ID":"959568c6-1106-46e0-89f4-d10e629dc2be","Type":"ContainerStarted","Data":"d2330edf5065932b60465224de9d122c8738d3cdd3854e111c8033441e3e1ef0"} Feb 18 19:53:17 crc kubenswrapper[4942]: I0218 19:53:17.803589 4942 generic.go:334] "Generic (PLEG): container finished" podID="959568c6-1106-46e0-89f4-d10e629dc2be" containerID="c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410" exitCode=0 Feb 18 19:53:17 crc kubenswrapper[4942]: I0218 19:53:17.803683 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qvpq" event={"ID":"959568c6-1106-46e0-89f4-d10e629dc2be","Type":"ContainerDied","Data":"c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410"} Feb 18 19:53:17 crc kubenswrapper[4942]: I0218 19:53:17.808351 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:53:18 crc kubenswrapper[4942]: I0218 19:53:18.813786 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qvpq" event={"ID":"959568c6-1106-46e0-89f4-d10e629dc2be","Type":"ContainerStarted","Data":"7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83"} Feb 18 19:53:19 crc kubenswrapper[4942]: I0218 19:53:19.827277 4942 generic.go:334] "Generic (PLEG): container finished" podID="959568c6-1106-46e0-89f4-d10e629dc2be" containerID="7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83" exitCode=0 Feb 18 19:53:19 crc kubenswrapper[4942]: I0218 19:53:19.827337 4942 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-9qvpq" event={"ID":"959568c6-1106-46e0-89f4-d10e629dc2be","Type":"ContainerDied","Data":"7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83"} Feb 18 19:53:20 crc kubenswrapper[4942]: I0218 19:53:20.836729 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qvpq" event={"ID":"959568c6-1106-46e0-89f4-d10e629dc2be","Type":"ContainerStarted","Data":"413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4"} Feb 18 19:53:20 crc kubenswrapper[4942]: I0218 19:53:20.860721 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9qvpq" podStartSLOduration=3.466546861 podStartE2EDuration="5.860704699s" podCreationTimestamp="2026-02-18 19:53:15 +0000 UTC" firstStartedPulling="2026-02-18 19:53:17.808027409 +0000 UTC m=+2157.512960084" lastFinishedPulling="2026-02-18 19:53:20.202185237 +0000 UTC m=+2159.907117922" observedRunningTime="2026-02-18 19:53:20.853937129 +0000 UTC m=+2160.558869804" watchObservedRunningTime="2026-02-18 19:53:20.860704699 +0000 UTC m=+2160.565637364" Feb 18 19:53:23 crc kubenswrapper[4942]: I0218 19:53:23.740718 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:53:23 crc kubenswrapper[4942]: I0218 19:53:23.741184 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:53:26 crc kubenswrapper[4942]: I0218 19:53:26.204747 4942 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:26 crc kubenswrapper[4942]: I0218 19:53:26.206023 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:26 crc kubenswrapper[4942]: I0218 19:53:26.276832 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:26 crc kubenswrapper[4942]: I0218 19:53:26.962051 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:27 crc kubenswrapper[4942]: I0218 19:53:27.008778 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9qvpq"] Feb 18 19:53:28 crc kubenswrapper[4942]: I0218 19:53:28.905384 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9qvpq" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" containerName="registry-server" containerID="cri-o://413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4" gracePeriod=2 Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.449082 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.555684 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5lcs\" (UniqueName: \"kubernetes.io/projected/959568c6-1106-46e0-89f4-d10e629dc2be-kube-api-access-j5lcs\") pod \"959568c6-1106-46e0-89f4-d10e629dc2be\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.556003 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-utilities\") pod \"959568c6-1106-46e0-89f4-d10e629dc2be\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.556124 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-catalog-content\") pod \"959568c6-1106-46e0-89f4-d10e629dc2be\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.556814 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-utilities" (OuterVolumeSpecName: "utilities") pod "959568c6-1106-46e0-89f4-d10e629dc2be" (UID: "959568c6-1106-46e0-89f4-d10e629dc2be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.562886 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/959568c6-1106-46e0-89f4-d10e629dc2be-kube-api-access-j5lcs" (OuterVolumeSpecName: "kube-api-access-j5lcs") pod "959568c6-1106-46e0-89f4-d10e629dc2be" (UID: "959568c6-1106-46e0-89f4-d10e629dc2be"). InnerVolumeSpecName "kube-api-access-j5lcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.671959 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5lcs\" (UniqueName: \"kubernetes.io/projected/959568c6-1106-46e0-89f4-d10e629dc2be-kube-api-access-j5lcs\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.671991 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.706396 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "959568c6-1106-46e0-89f4-d10e629dc2be" (UID: "959568c6-1106-46e0-89f4-d10e629dc2be"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.773532 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.917800 4942 generic.go:334] "Generic (PLEG): container finished" podID="959568c6-1106-46e0-89f4-d10e629dc2be" containerID="413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4" exitCode=0 Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.917847 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qvpq" event={"ID":"959568c6-1106-46e0-89f4-d10e629dc2be","Type":"ContainerDied","Data":"413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4"} Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.917876 4942 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-9qvpq" event={"ID":"959568c6-1106-46e0-89f4-d10e629dc2be","Type":"ContainerDied","Data":"d2330edf5065932b60465224de9d122c8738d3cdd3854e111c8033441e3e1ef0"} Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.917894 4942 scope.go:117] "RemoveContainer" containerID="413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.918028 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.949084 4942 scope.go:117] "RemoveContainer" containerID="7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.977308 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9qvpq"] Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.979810 4942 scope.go:117] "RemoveContainer" containerID="c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.989289 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9qvpq"] Feb 18 19:53:30 crc kubenswrapper[4942]: I0218 19:53:30.030011 4942 scope.go:117] "RemoveContainer" containerID="413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4" Feb 18 19:53:30 crc kubenswrapper[4942]: E0218 19:53:30.031370 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4\": container with ID starting with 413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4 not found: ID does not exist" containerID="413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4" Feb 18 19:53:30 crc kubenswrapper[4942]: I0218 
19:53:30.031444 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4"} err="failed to get container status \"413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4\": rpc error: code = NotFound desc = could not find container \"413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4\": container with ID starting with 413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4 not found: ID does not exist" Feb 18 19:53:30 crc kubenswrapper[4942]: I0218 19:53:30.031487 4942 scope.go:117] "RemoveContainer" containerID="7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83" Feb 18 19:53:30 crc kubenswrapper[4942]: E0218 19:53:30.031991 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83\": container with ID starting with 7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83 not found: ID does not exist" containerID="7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83" Feb 18 19:53:30 crc kubenswrapper[4942]: I0218 19:53:30.032068 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83"} err="failed to get container status \"7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83\": rpc error: code = NotFound desc = could not find container \"7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83\": container with ID starting with 7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83 not found: ID does not exist" Feb 18 19:53:30 crc kubenswrapper[4942]: I0218 19:53:30.032112 4942 scope.go:117] "RemoveContainer" containerID="c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410" Feb 18 19:53:30 crc 
kubenswrapper[4942]: E0218 19:53:30.032572 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410\": container with ID starting with c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410 not found: ID does not exist" containerID="c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410" Feb 18 19:53:30 crc kubenswrapper[4942]: I0218 19:53:30.032675 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410"} err="failed to get container status \"c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410\": rpc error: code = NotFound desc = could not find container \"c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410\": container with ID starting with c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410 not found: ID does not exist" Feb 18 19:53:31 crc kubenswrapper[4942]: I0218 19:53:31.056445 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" path="/var/lib/kubelet/pods/959568c6-1106-46e0-89f4-d10e629dc2be/volumes" Feb 18 19:53:53 crc kubenswrapper[4942]: I0218 19:53:53.740976 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:53:53 crc kubenswrapper[4942]: I0218 19:53:53.741492 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 18 19:53:53 crc kubenswrapper[4942]: I0218 19:53:53.741550 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:53:53 crc kubenswrapper[4942]: I0218 19:53:53.742230 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c81e1d649813c8beecb89429c1c4dde799b86b0af5d8804642a6a83d2ee52071"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:53:53 crc kubenswrapper[4942]: I0218 19:53:53.742300 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://c81e1d649813c8beecb89429c1c4dde799b86b0af5d8804642a6a83d2ee52071" gracePeriod=600 Feb 18 19:53:54 crc kubenswrapper[4942]: I0218 19:53:54.151479 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="c81e1d649813c8beecb89429c1c4dde799b86b0af5d8804642a6a83d2ee52071" exitCode=0 Feb 18 19:53:54 crc kubenswrapper[4942]: I0218 19:53:54.151552 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"c81e1d649813c8beecb89429c1c4dde799b86b0af5d8804642a6a83d2ee52071"} Feb 18 19:53:54 crc kubenswrapper[4942]: I0218 19:53:54.152069 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" 
event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d"} Feb 18 19:53:54 crc kubenswrapper[4942]: I0218 19:53:54.152106 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:54:01 crc kubenswrapper[4942]: I0218 19:54:01.868587 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4hkw6"] Feb 18 19:54:01 crc kubenswrapper[4942]: E0218 19:54:01.869628 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" containerName="registry-server" Feb 18 19:54:01 crc kubenswrapper[4942]: I0218 19:54:01.869641 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" containerName="registry-server" Feb 18 19:54:01 crc kubenswrapper[4942]: E0218 19:54:01.869669 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" containerName="extract-utilities" Feb 18 19:54:01 crc kubenswrapper[4942]: I0218 19:54:01.869676 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" containerName="extract-utilities" Feb 18 19:54:01 crc kubenswrapper[4942]: E0218 19:54:01.869702 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" containerName="extract-content" Feb 18 19:54:01 crc kubenswrapper[4942]: I0218 19:54:01.869710 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" containerName="extract-content" Feb 18 19:54:01 crc kubenswrapper[4942]: I0218 19:54:01.870526 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" containerName="registry-server" Feb 18 19:54:01 crc kubenswrapper[4942]: I0218 19:54:01.874039 4942 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:01 crc kubenswrapper[4942]: I0218 19:54:01.885585 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4hkw6"] Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.029981 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-utilities\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.030116 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lck4\" (UniqueName: \"kubernetes.io/projected/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-kube-api-access-9lck4\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.030196 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-catalog-content\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.132123 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-utilities\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.132210 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9lck4\" (UniqueName: \"kubernetes.io/projected/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-kube-api-access-9lck4\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.132250 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-catalog-content\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.133015 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-catalog-content\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.133176 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-utilities\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.154971 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lck4\" (UniqueName: \"kubernetes.io/projected/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-kube-api-access-9lck4\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.205523 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.683897 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4hkw6"] Feb 18 19:54:03 crc kubenswrapper[4942]: I0218 19:54:03.246693 4942 generic.go:334] "Generic (PLEG): container finished" podID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerID="2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a" exitCode=0 Feb 18 19:54:03 crc kubenswrapper[4942]: I0218 19:54:03.246788 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hkw6" event={"ID":"5e924cf0-b2c6-4897-b2e7-4f9b8897d083","Type":"ContainerDied","Data":"2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a"} Feb 18 19:54:03 crc kubenswrapper[4942]: I0218 19:54:03.247957 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hkw6" event={"ID":"5e924cf0-b2c6-4897-b2e7-4f9b8897d083","Type":"ContainerStarted","Data":"3827ab588f8db14da1f2e9a66731d1db7bc3e013ccdb2c77ca4f1d290292025b"} Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.258252 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hkw6" event={"ID":"5e924cf0-b2c6-4897-b2e7-4f9b8897d083","Type":"ContainerStarted","Data":"ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8"} Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.664566 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lszsq"] Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.666909 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.678538 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lszsq"] Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.787293 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h69hf\" (UniqueName: \"kubernetes.io/projected/383667c1-9137-4f4f-a870-3bbc3dee3050-kube-api-access-h69hf\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.787449 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-catalog-content\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.787498 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-utilities\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.889712 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-catalog-content\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.889825 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-utilities\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.889904 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h69hf\" (UniqueName: \"kubernetes.io/projected/383667c1-9137-4f4f-a870-3bbc3dee3050-kube-api-access-h69hf\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.890327 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-catalog-content\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.890342 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-utilities\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.918744 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h69hf\" (UniqueName: \"kubernetes.io/projected/383667c1-9137-4f4f-a870-3bbc3dee3050-kube-api-access-h69hf\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.991501 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:05 crc kubenswrapper[4942]: I0218 19:54:05.269003 4942 generic.go:334] "Generic (PLEG): container finished" podID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerID="ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8" exitCode=0 Feb 18 19:54:05 crc kubenswrapper[4942]: I0218 19:54:05.269315 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hkw6" event={"ID":"5e924cf0-b2c6-4897-b2e7-4f9b8897d083","Type":"ContainerDied","Data":"ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8"} Feb 18 19:54:05 crc kubenswrapper[4942]: I0218 19:54:05.533517 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lszsq"] Feb 18 19:54:05 crc kubenswrapper[4942]: W0218 19:54:05.539449 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod383667c1_9137_4f4f_a870_3bbc3dee3050.slice/crio-0ac470feecbc5858e591f7152f03a704a24ee4c2d533e4c44891108eea6256a3 WatchSource:0}: Error finding container 0ac470feecbc5858e591f7152f03a704a24ee4c2d533e4c44891108eea6256a3: Status 404 returned error can't find the container with id 0ac470feecbc5858e591f7152f03a704a24ee4c2d533e4c44891108eea6256a3 Feb 18 19:54:06 crc kubenswrapper[4942]: I0218 19:54:06.285353 4942 generic.go:334] "Generic (PLEG): container finished" podID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerID="fced62a823aabc9eb96d7dc1c21c39c26f67347f087ea0b1c45827cef7157377" exitCode=0 Feb 18 19:54:06 crc kubenswrapper[4942]: I0218 19:54:06.285414 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszsq" event={"ID":"383667c1-9137-4f4f-a870-3bbc3dee3050","Type":"ContainerDied","Data":"fced62a823aabc9eb96d7dc1c21c39c26f67347f087ea0b1c45827cef7157377"} Feb 18 19:54:06 crc kubenswrapper[4942]: I0218 
19:54:06.286002 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszsq" event={"ID":"383667c1-9137-4f4f-a870-3bbc3dee3050","Type":"ContainerStarted","Data":"0ac470feecbc5858e591f7152f03a704a24ee4c2d533e4c44891108eea6256a3"} Feb 18 19:54:06 crc kubenswrapper[4942]: I0218 19:54:06.291431 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hkw6" event={"ID":"5e924cf0-b2c6-4897-b2e7-4f9b8897d083","Type":"ContainerStarted","Data":"1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb"} Feb 18 19:54:06 crc kubenswrapper[4942]: I0218 19:54:06.329242 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4hkw6" podStartSLOduration=2.9359082340000002 podStartE2EDuration="5.329218359s" podCreationTimestamp="2026-02-18 19:54:01 +0000 UTC" firstStartedPulling="2026-02-18 19:54:03.248998178 +0000 UTC m=+2202.953930833" lastFinishedPulling="2026-02-18 19:54:05.642308303 +0000 UTC m=+2205.347240958" observedRunningTime="2026-02-18 19:54:06.325839039 +0000 UTC m=+2206.030771704" watchObservedRunningTime="2026-02-18 19:54:06.329218359 +0000 UTC m=+2206.034151044" Feb 18 19:54:08 crc kubenswrapper[4942]: I0218 19:54:08.309618 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszsq" event={"ID":"383667c1-9137-4f4f-a870-3bbc3dee3050","Type":"ContainerStarted","Data":"01241740eda1e01b1148596092553039bc8d0f4fa82bfe1851e652e1a9db2c10"} Feb 18 19:54:09 crc kubenswrapper[4942]: I0218 19:54:09.320267 4942 generic.go:334] "Generic (PLEG): container finished" podID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerID="01241740eda1e01b1148596092553039bc8d0f4fa82bfe1851e652e1a9db2c10" exitCode=0 Feb 18 19:54:09 crc kubenswrapper[4942]: I0218 19:54:09.320341 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszsq" 
event={"ID":"383667c1-9137-4f4f-a870-3bbc3dee3050","Type":"ContainerDied","Data":"01241740eda1e01b1148596092553039bc8d0f4fa82bfe1851e652e1a9db2c10"} Feb 18 19:54:10 crc kubenswrapper[4942]: I0218 19:54:10.331728 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszsq" event={"ID":"383667c1-9137-4f4f-a870-3bbc3dee3050","Type":"ContainerStarted","Data":"e6e36f3a740b91dbd03b01c5e3d04984228711747c2ab244bd4357d34fe38eec"} Feb 18 19:54:10 crc kubenswrapper[4942]: I0218 19:54:10.360856 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lszsq" podStartSLOduration=2.891764562 podStartE2EDuration="6.360833546s" podCreationTimestamp="2026-02-18 19:54:04 +0000 UTC" firstStartedPulling="2026-02-18 19:54:06.288333473 +0000 UTC m=+2205.993266148" lastFinishedPulling="2026-02-18 19:54:09.757402447 +0000 UTC m=+2209.462335132" observedRunningTime="2026-02-18 19:54:10.353779739 +0000 UTC m=+2210.058712424" watchObservedRunningTime="2026-02-18 19:54:10.360833546 +0000 UTC m=+2210.065766211" Feb 18 19:54:12 crc kubenswrapper[4942]: I0218 19:54:12.205651 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:12 crc kubenswrapper[4942]: I0218 19:54:12.206253 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:12 crc kubenswrapper[4942]: I0218 19:54:12.280887 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:12 crc kubenswrapper[4942]: I0218 19:54:12.403376 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:12 crc kubenswrapper[4942]: I0218 19:54:12.887188 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-4hkw6"]
Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.364953 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4hkw6" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerName="registry-server" containerID="cri-o://1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb" gracePeriod=2
Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.809627 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hkw6"
Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.887814 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-utilities\") pod \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") "
Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.887882 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lck4\" (UniqueName: \"kubernetes.io/projected/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-kube-api-access-9lck4\") pod \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") "
Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.888059 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-catalog-content\") pod \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") "
Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.889475 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-utilities" (OuterVolumeSpecName: "utilities") pod "5e924cf0-b2c6-4897-b2e7-4f9b8897d083" (UID: "5e924cf0-b2c6-4897-b2e7-4f9b8897d083"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.896731 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-kube-api-access-9lck4" (OuterVolumeSpecName: "kube-api-access-9lck4") pod "5e924cf0-b2c6-4897-b2e7-4f9b8897d083" (UID: "5e924cf0-b2c6-4897-b2e7-4f9b8897d083"). InnerVolumeSpecName "kube-api-access-9lck4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.991017 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.991052 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lck4\" (UniqueName: \"kubernetes.io/projected/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-kube-api-access-9lck4\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.991942 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lszsq"
Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.992048 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lszsq"
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.053777 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lszsq"
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.377962 4942 generic.go:334] "Generic (PLEG): container finished" podID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerID="1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb" exitCode=0
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.378231 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hkw6" event={"ID":"5e924cf0-b2c6-4897-b2e7-4f9b8897d083","Type":"ContainerDied","Data":"1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb"}
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.378295 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hkw6" event={"ID":"5e924cf0-b2c6-4897-b2e7-4f9b8897d083","Type":"ContainerDied","Data":"3827ab588f8db14da1f2e9a66731d1db7bc3e013ccdb2c77ca4f1d290292025b"}
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.378317 4942 scope.go:117] "RemoveContainer" containerID="1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb"
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.378422 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hkw6"
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.420446 4942 scope.go:117] "RemoveContainer" containerID="ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8"
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.426320 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lszsq"
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.441865 4942 scope.go:117] "RemoveContainer" containerID="2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a"
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.496266 4942 scope.go:117] "RemoveContainer" containerID="1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb"
Feb 18 19:54:15 crc kubenswrapper[4942]: E0218 19:54:15.498372 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb\": container with ID starting with 1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb not found: ID does not exist" containerID="1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb"
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.498418 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb"} err="failed to get container status \"1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb\": rpc error: code = NotFound desc = could not find container \"1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb\": container with ID starting with 1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb not found: ID does not exist"
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.498445 4942 scope.go:117] "RemoveContainer" containerID="ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8"
Feb 18 19:54:15 crc kubenswrapper[4942]: E0218 19:54:15.498784 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8\": container with ID starting with ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8 not found: ID does not exist" containerID="ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8"
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.498803 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8"} err="failed to get container status \"ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8\": rpc error: code = NotFound desc = could not find container \"ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8\": container with ID starting with ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8 not found: ID does not exist"
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.498817 4942 scope.go:117] "RemoveContainer" containerID="2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a"
Feb 18 19:54:15 crc kubenswrapper[4942]: E0218 19:54:15.499152 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a\": container with ID starting with 2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a not found: ID does not exist" containerID="2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a"
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.499172 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a"} err="failed to get container status \"2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a\": rpc error: code = NotFound desc = could not find container \"2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a\": container with ID starting with 2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a not found: ID does not exist"
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.547505 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e924cf0-b2c6-4897-b2e7-4f9b8897d083" (UID: "5e924cf0-b2c6-4897-b2e7-4f9b8897d083"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.605993 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.738075 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4hkw6"]
Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.748416 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4hkw6"]
Feb 18 19:54:17 crc kubenswrapper[4942]: I0218 19:54:17.054708 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" path="/var/lib/kubelet/pods/5e924cf0-b2c6-4897-b2e7-4f9b8897d083/volumes"
Feb 18 19:54:17 crc kubenswrapper[4942]: I0218 19:54:17.461949 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lszsq"]
Feb 18 19:54:18 crc kubenswrapper[4942]: I0218 19:54:18.410711 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lszsq" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerName="registry-server" containerID="cri-o://e6e36f3a740b91dbd03b01c5e3d04984228711747c2ab244bd4357d34fe38eec" gracePeriod=2
Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.422687 4942 generic.go:334] "Generic (PLEG): container finished" podID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerID="e6e36f3a740b91dbd03b01c5e3d04984228711747c2ab244bd4357d34fe38eec" exitCode=0
Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.422768 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszsq" event={"ID":"383667c1-9137-4f4f-a870-3bbc3dee3050","Type":"ContainerDied","Data":"e6e36f3a740b91dbd03b01c5e3d04984228711747c2ab244bd4357d34fe38eec"}
Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.423152 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszsq" event={"ID":"383667c1-9137-4f4f-a870-3bbc3dee3050","Type":"ContainerDied","Data":"0ac470feecbc5858e591f7152f03a704a24ee4c2d533e4c44891108eea6256a3"}
Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.423175 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ac470feecbc5858e591f7152f03a704a24ee4c2d533e4c44891108eea6256a3"
Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.459463 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lszsq"
Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.486728 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-utilities\") pod \"383667c1-9137-4f4f-a870-3bbc3dee3050\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") "
Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.486839 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h69hf\" (UniqueName: \"kubernetes.io/projected/383667c1-9137-4f4f-a870-3bbc3dee3050-kube-api-access-h69hf\") pod \"383667c1-9137-4f4f-a870-3bbc3dee3050\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") "
Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.487031 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-catalog-content\") pod \"383667c1-9137-4f4f-a870-3bbc3dee3050\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") "
Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.487584 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-utilities" (OuterVolumeSpecName: "utilities") pod "383667c1-9137-4f4f-a870-3bbc3dee3050" (UID: "383667c1-9137-4f4f-a870-3bbc3dee3050"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.492061 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/383667c1-9137-4f4f-a870-3bbc3dee3050-kube-api-access-h69hf" (OuterVolumeSpecName: "kube-api-access-h69hf") pod "383667c1-9137-4f4f-a870-3bbc3dee3050" (UID: "383667c1-9137-4f4f-a870-3bbc3dee3050"). InnerVolumeSpecName "kube-api-access-h69hf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.513089 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "383667c1-9137-4f4f-a870-3bbc3dee3050" (UID: "383667c1-9137-4f4f-a870-3bbc3dee3050"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.589411 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.589439 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h69hf\" (UniqueName: \"kubernetes.io/projected/383667c1-9137-4f4f-a870-3bbc3dee3050-kube-api-access-h69hf\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.589449 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:20 crc kubenswrapper[4942]: I0218 19:54:20.434283 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lszsq"
Feb 18 19:54:20 crc kubenswrapper[4942]: I0218 19:54:20.484051 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lszsq"]
Feb 18 19:54:20 crc kubenswrapper[4942]: I0218 19:54:20.494427 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lszsq"]
Feb 18 19:54:21 crc kubenswrapper[4942]: I0218 19:54:21.074697 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" path="/var/lib/kubelet/pods/383667c1-9137-4f4f-a870-3bbc3dee3050/volumes"
Feb 18 19:55:05 crc kubenswrapper[4942]: I0218 19:55:05.874449 4942 generic.go:334] "Generic (PLEG): container finished" podID="1924338e-aea6-474f-9216-bb7eb32dc5fe" containerID="0881a0c3a7d6de31317f11d4bbacc01b597b4d2f4939061d09363608ec65d1f7" exitCode=0
Feb 18 19:55:05 crc kubenswrapper[4942]: I0218 19:55:05.874568 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" event={"ID":"1924338e-aea6-474f-9216-bb7eb32dc5fe","Type":"ContainerDied","Data":"0881a0c3a7d6de31317f11d4bbacc01b597b4d2f4939061d09363608ec65d1f7"}
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.317097 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.377812 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-inventory\") pod \"1924338e-aea6-474f-9216-bb7eb32dc5fe\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") "
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.377914 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-combined-ca-bundle\") pod \"1924338e-aea6-474f-9216-bb7eb32dc5fe\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") "
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.377945 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llwjc\" (UniqueName: \"kubernetes.io/projected/1924338e-aea6-474f-9216-bb7eb32dc5fe-kube-api-access-llwjc\") pod \"1924338e-aea6-474f-9216-bb7eb32dc5fe\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") "
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.377991 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-secret-0\") pod \"1924338e-aea6-474f-9216-bb7eb32dc5fe\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") "
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.378080 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-ssh-key-openstack-edpm-ipam\") pod \"1924338e-aea6-474f-9216-bb7eb32dc5fe\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") "
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.384081 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1924338e-aea6-474f-9216-bb7eb32dc5fe" (UID: "1924338e-aea6-474f-9216-bb7eb32dc5fe"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.384564 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1924338e-aea6-474f-9216-bb7eb32dc5fe-kube-api-access-llwjc" (OuterVolumeSpecName: "kube-api-access-llwjc") pod "1924338e-aea6-474f-9216-bb7eb32dc5fe" (UID: "1924338e-aea6-474f-9216-bb7eb32dc5fe"). InnerVolumeSpecName "kube-api-access-llwjc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.405390 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "1924338e-aea6-474f-9216-bb7eb32dc5fe" (UID: "1924338e-aea6-474f-9216-bb7eb32dc5fe"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.407819 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-inventory" (OuterVolumeSpecName: "inventory") pod "1924338e-aea6-474f-9216-bb7eb32dc5fe" (UID: "1924338e-aea6-474f-9216-bb7eb32dc5fe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.415667 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1924338e-aea6-474f-9216-bb7eb32dc5fe" (UID: "1924338e-aea6-474f-9216-bb7eb32dc5fe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.480607 4942 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.480660 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.480684 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.480704 4942 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.480723 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llwjc\" (UniqueName: \"kubernetes.io/projected/1924338e-aea6-474f-9216-bb7eb32dc5fe-kube-api-access-llwjc\") on node \"crc\" DevicePath \"\""
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.894107 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" event={"ID":"1924338e-aea6-474f-9216-bb7eb32dc5fe","Type":"ContainerDied","Data":"85a49794f2563fabd1097b6b3517a1243e11855d7bbbaa0e4c993f19ad38505f"}
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.894154 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.894155 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85a49794f2563fabd1097b6b3517a1243e11855d7bbbaa0e4c993f19ad38505f"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.019249 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"]
Feb 18 19:55:08 crc kubenswrapper[4942]: E0218 19:55:08.020409 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerName="extract-utilities"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.020426 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerName="extract-utilities"
Feb 18 19:55:08 crc kubenswrapper[4942]: E0218 19:55:08.020453 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerName="extract-utilities"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.020459 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerName="extract-utilities"
Feb 18 19:55:08 crc kubenswrapper[4942]: E0218 19:55:08.020484 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerName="registry-server"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.020494 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerName="registry-server"
Feb 18 19:55:08 crc kubenswrapper[4942]: E0218 19:55:08.020504 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerName="registry-server"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.020513 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerName="registry-server"
Feb 18 19:55:08 crc kubenswrapper[4942]: E0218 19:55:08.020539 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerName="extract-content"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.020545 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerName="extract-content"
Feb 18 19:55:08 crc kubenswrapper[4942]: E0218 19:55:08.020551 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerName="extract-content"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.020558 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerName="extract-content"
Feb 18 19:55:08 crc kubenswrapper[4942]: E0218 19:55:08.020570 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1924338e-aea6-474f-9216-bb7eb32dc5fe" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.020580 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="1924338e-aea6-474f-9216-bb7eb32dc5fe" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.020982 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerName="registry-server"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.021008 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerName="registry-server"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.021029 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="1924338e-aea6-474f-9216-bb7eb32dc5fe" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.022086 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.026241 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.026440 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.028105 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.028323 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.028435 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.028561 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.028696 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.051033 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"]
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.092739 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.092799 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.092927 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.092967 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.093025 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.093109 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.093133 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5vhg\" (UniqueName: \"kubernetes.io/projected/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-kube-api-access-s5vhg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.093285 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.093394 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195439 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195539 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195574 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195600 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195623 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195663 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195706 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195731 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5vhg\" (UniqueName: \"kubernetes.io/projected/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-kube-api-access-s5vhg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195835 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.197151 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.200449 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.201355 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.201583 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.210555 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.210657 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.210805 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.215847 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5vhg\" (UniqueName: \"kubernetes.io/projected/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-kube-api-access-s5vhg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"
Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.219380 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") "
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.340561 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.949871 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"] Feb 18 19:55:08 crc kubenswrapper[4942]: W0218 19:55:08.955182 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2f25881_283c_4b0e_9f7f_e7e8ae0dfc70.slice/crio-26b2e0a6c7d2cd46d70e4fc3ba7e0f057acbb4c702bcc59e97a7a00a1e2041ed WatchSource:0}: Error finding container 26b2e0a6c7d2cd46d70e4fc3ba7e0f057acbb4c702bcc59e97a7a00a1e2041ed: Status 404 returned error can't find the container with id 26b2e0a6c7d2cd46d70e4fc3ba7e0f057acbb4c702bcc59e97a7a00a1e2041ed Feb 18 19:55:09 crc kubenswrapper[4942]: I0218 19:55:09.914193 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" event={"ID":"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70","Type":"ContainerStarted","Data":"64878fa884de6ab75395084ab5066c4598e313f06a7c48d59600498c9717bbc7"} Feb 18 19:55:09 crc kubenswrapper[4942]: I0218 19:55:09.914753 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" event={"ID":"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70","Type":"ContainerStarted","Data":"26b2e0a6c7d2cd46d70e4fc3ba7e0f057acbb4c702bcc59e97a7a00a1e2041ed"} Feb 18 19:55:09 crc kubenswrapper[4942]: I0218 19:55:09.945825 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" podStartSLOduration=2.491454004 podStartE2EDuration="2.945806906s" podCreationTimestamp="2026-02-18 19:55:07 +0000 UTC" 
firstStartedPulling="2026-02-18 19:55:08.958347982 +0000 UTC m=+2268.663280647" lastFinishedPulling="2026-02-18 19:55:09.412700884 +0000 UTC m=+2269.117633549" observedRunningTime="2026-02-18 19:55:09.937926987 +0000 UTC m=+2269.642859692" watchObservedRunningTime="2026-02-18 19:55:09.945806906 +0000 UTC m=+2269.650739561" Feb 18 19:56:23 crc kubenswrapper[4942]: I0218 19:56:23.740718 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:56:23 crc kubenswrapper[4942]: I0218 19:56:23.741371 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:56:53 crc kubenswrapper[4942]: I0218 19:56:53.740961 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:56:53 crc kubenswrapper[4942]: I0218 19:56:53.741537 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:57:23 crc kubenswrapper[4942]: I0218 19:57:23.740878 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:57:23 crc kubenswrapper[4942]: I0218 19:57:23.741458 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:57:23 crc kubenswrapper[4942]: I0218 19:57:23.741509 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:57:23 crc kubenswrapper[4942]: I0218 19:57:23.742248 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:57:23 crc kubenswrapper[4942]: I0218 19:57:23.742307 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" gracePeriod=600 Feb 18 19:57:23 crc kubenswrapper[4942]: E0218 19:57:23.880273 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:57:24 crc kubenswrapper[4942]: I0218 19:57:24.271179 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" exitCode=0 Feb 18 19:57:24 crc kubenswrapper[4942]: I0218 19:57:24.271236 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d"} Feb 18 19:57:24 crc kubenswrapper[4942]: I0218 19:57:24.271506 4942 scope.go:117] "RemoveContainer" containerID="c81e1d649813c8beecb89429c1c4dde799b86b0af5d8804642a6a83d2ee52071" Feb 18 19:57:24 crc kubenswrapper[4942]: I0218 19:57:24.272757 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:57:24 crc kubenswrapper[4942]: E0218 19:57:24.273131 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:57:33 crc kubenswrapper[4942]: I0218 19:57:33.357367 4942 generic.go:334] "Generic (PLEG): container finished" podID="d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" containerID="64878fa884de6ab75395084ab5066c4598e313f06a7c48d59600498c9717bbc7" exitCode=0 Feb 18 19:57:33 crc kubenswrapper[4942]: I0218 19:57:33.357440 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" 
event={"ID":"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70","Type":"ContainerDied","Data":"64878fa884de6ab75395084ab5066c4598e313f06a7c48d59600498c9717bbc7"} Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.835991 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.971934 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-inventory\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.972304 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-combined-ca-bundle\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.972943 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-0\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.973007 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5vhg\" (UniqueName: \"kubernetes.io/projected/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-kube-api-access-s5vhg\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.973098 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-ssh-key-openstack-edpm-ipam\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.973158 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-1\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.973230 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-extra-config-0\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.973264 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-1\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.973306 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-0\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.985154 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" 
(UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.995013 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-kube-api-access-s5vhg" (OuterVolumeSpecName: "kube-api-access-s5vhg") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" (UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "kube-api-access-s5vhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.998511 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" (UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.999544 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-inventory" (OuterVolumeSpecName: "inventory") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" (UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.000568 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" (UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.001824 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" (UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.006478 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" (UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.015156 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" (UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.022054 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" (UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.076362 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5vhg\" (UniqueName: \"kubernetes.io/projected/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-kube-api-access-s5vhg\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.076561 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.076670 4942 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.076824 4942 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.076926 4942 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.077003 4942 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.077104 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-inventory\") on node 
\"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.077192 4942 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.077267 4942 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.381722 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" event={"ID":"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70","Type":"ContainerDied","Data":"26b2e0a6c7d2cd46d70e4fc3ba7e0f057acbb4c702bcc59e97a7a00a1e2041ed"} Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.381773 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26b2e0a6c7d2cd46d70e4fc3ba7e0f057acbb4c702bcc59e97a7a00a1e2041ed" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.382103 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.517778 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5"] Feb 18 19:57:35 crc kubenswrapper[4942]: E0218 19:57:35.518329 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.518349 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.518586 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.523728 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.527657 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5"] Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.528801 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.529072 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.529292 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.528996 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.529493 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.688931 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.689080 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.689161 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xpn6\" (UniqueName: \"kubernetes.io/projected/5ea9c52a-c8f0-4189-a995-202a5a8a07db-kube-api-access-6xpn6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.689211 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.689244 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.689309 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.689368 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.791515 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.791610 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.791641 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 
19:57:35.791747 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.791807 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xpn6\" (UniqueName: \"kubernetes.io/projected/5ea9c52a-c8f0-4189-a995-202a5a8a07db-kube-api-access-6xpn6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.791852 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.791885 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.795298 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.796421 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.796495 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.796533 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.799525 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.806544 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.810118 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xpn6\" (UniqueName: \"kubernetes.io/projected/5ea9c52a-c8f0-4189-a995-202a5a8a07db-kube-api-access-6xpn6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.862559 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:36 crc kubenswrapper[4942]: I0218 19:57:36.384391 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5"] Feb 18 19:57:37 crc kubenswrapper[4942]: I0218 19:57:37.036358 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:57:37 crc kubenswrapper[4942]: E0218 19:57:37.036866 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:57:37 crc kubenswrapper[4942]: I0218 19:57:37.400170 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" event={"ID":"5ea9c52a-c8f0-4189-a995-202a5a8a07db","Type":"ContainerStarted","Data":"6617e1c641d88ba7eecc0e139fcf0fe9a178e976a0890aa0716fd002e93b4732"} Feb 18 19:57:37 crc kubenswrapper[4942]: I0218 19:57:37.400241 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" event={"ID":"5ea9c52a-c8f0-4189-a995-202a5a8a07db","Type":"ContainerStarted","Data":"0c96156ca263a32dd6a9652c4b243415d18ac4f84af1e982cee89d29282773ff"} Feb 18 19:57:37 crc kubenswrapper[4942]: I0218 19:57:37.423185 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" podStartSLOduration=1.9631740720000002 podStartE2EDuration="2.423166323s" podCreationTimestamp="2026-02-18 19:57:35 +0000 UTC" firstStartedPulling="2026-02-18 
19:57:36.397696681 +0000 UTC m=+2416.102629346" lastFinishedPulling="2026-02-18 19:57:36.857688922 +0000 UTC m=+2416.562621597" observedRunningTime="2026-02-18 19:57:37.420168394 +0000 UTC m=+2417.125101059" watchObservedRunningTime="2026-02-18 19:57:37.423166323 +0000 UTC m=+2417.128098988" Feb 18 19:57:48 crc kubenswrapper[4942]: I0218 19:57:48.035735 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:57:48 crc kubenswrapper[4942]: E0218 19:57:48.036742 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:58:02 crc kubenswrapper[4942]: I0218 19:58:02.036285 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:58:02 crc kubenswrapper[4942]: E0218 19:58:02.037311 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:58:16 crc kubenswrapper[4942]: I0218 19:58:16.035721 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:58:16 crc kubenswrapper[4942]: E0218 19:58:16.036488 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:58:28 crc kubenswrapper[4942]: I0218 19:58:28.036261 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:58:28 crc kubenswrapper[4942]: E0218 19:58:28.037435 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:58:42 crc kubenswrapper[4942]: I0218 19:58:42.037148 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:58:42 crc kubenswrapper[4942]: E0218 19:58:42.038738 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:58:54 crc kubenswrapper[4942]: I0218 19:58:54.036111 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:58:54 crc kubenswrapper[4942]: E0218 19:58:54.037330 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:59:06 crc kubenswrapper[4942]: I0218 19:59:06.039699 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:59:06 crc kubenswrapper[4942]: E0218 19:59:06.040670 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:59:19 crc kubenswrapper[4942]: I0218 19:59:19.036863 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:59:19 crc kubenswrapper[4942]: E0218 19:59:19.037680 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:59:34 crc kubenswrapper[4942]: I0218 19:59:34.036530 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:59:34 crc kubenswrapper[4942]: E0218 19:59:34.037821 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:59:38 crc kubenswrapper[4942]: I0218 19:59:38.730362 4942 generic.go:334] "Generic (PLEG): container finished" podID="5ea9c52a-c8f0-4189-a995-202a5a8a07db" containerID="6617e1c641d88ba7eecc0e139fcf0fe9a178e976a0890aa0716fd002e93b4732" exitCode=0 Feb 18 19:59:38 crc kubenswrapper[4942]: I0218 19:59:38.730424 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" event={"ID":"5ea9c52a-c8f0-4189-a995-202a5a8a07db","Type":"ContainerDied","Data":"6617e1c641d88ba7eecc0e139fcf0fe9a178e976a0890aa0716fd002e93b4732"} Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.184917 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.301471 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ssh-key-openstack-edpm-ipam\") pod \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.301532 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-2\") pod \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.301558 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-1\") pod \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.301660 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-0\") pod \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.301829 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-inventory\") pod \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " Feb 18 19:59:40 crc 
kubenswrapper[4942]: I0218 19:59:40.301901 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xpn6\" (UniqueName: \"kubernetes.io/projected/5ea9c52a-c8f0-4189-a995-202a5a8a07db-kube-api-access-6xpn6\") pod \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.301964 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-telemetry-combined-ca-bundle\") pod \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.308315 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "5ea9c52a-c8f0-4189-a995-202a5a8a07db" (UID: "5ea9c52a-c8f0-4189-a995-202a5a8a07db"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.308965 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea9c52a-c8f0-4189-a995-202a5a8a07db-kube-api-access-6xpn6" (OuterVolumeSpecName: "kube-api-access-6xpn6") pod "5ea9c52a-c8f0-4189-a995-202a5a8a07db" (UID: "5ea9c52a-c8f0-4189-a995-202a5a8a07db"). InnerVolumeSpecName "kube-api-access-6xpn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.335850 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "5ea9c52a-c8f0-4189-a995-202a5a8a07db" (UID: "5ea9c52a-c8f0-4189-a995-202a5a8a07db"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.336558 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "5ea9c52a-c8f0-4189-a995-202a5a8a07db" (UID: "5ea9c52a-c8f0-4189-a995-202a5a8a07db"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.342141 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-inventory" (OuterVolumeSpecName: "inventory") pod "5ea9c52a-c8f0-4189-a995-202a5a8a07db" (UID: "5ea9c52a-c8f0-4189-a995-202a5a8a07db"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.345042 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "5ea9c52a-c8f0-4189-a995-202a5a8a07db" (UID: "5ea9c52a-c8f0-4189-a995-202a5a8a07db"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.360208 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5ea9c52a-c8f0-4189-a995-202a5a8a07db" (UID: "5ea9c52a-c8f0-4189-a995-202a5a8a07db"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.404531 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xpn6\" (UniqueName: \"kubernetes.io/projected/5ea9c52a-c8f0-4189-a995-202a5a8a07db-kube-api-access-6xpn6\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.404561 4942 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.404572 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.404580 4942 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.404590 4942 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 
18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.404599 4942 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.404608 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.764578 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" event={"ID":"5ea9c52a-c8f0-4189-a995-202a5a8a07db","Type":"ContainerDied","Data":"0c96156ca263a32dd6a9652c4b243415d18ac4f84af1e982cee89d29282773ff"} Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.764640 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c96156ca263a32dd6a9652c4b243415d18ac4f84af1e982cee89d29282773ff" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.764694 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:59:45 crc kubenswrapper[4942]: I0218 19:59:45.037370 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:59:45 crc kubenswrapper[4942]: E0218 19:59:45.038471 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:59:57 crc kubenswrapper[4942]: I0218 19:59:57.037418 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:59:57 crc kubenswrapper[4942]: E0218 19:59:57.038604 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.175173 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp"] Feb 18 20:00:00 crc kubenswrapper[4942]: E0218 20:00:00.176311 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea9c52a-c8f0-4189-a995-202a5a8a07db" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.176345 4942 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5ea9c52a-c8f0-4189-a995-202a5a8a07db" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.177141 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea9c52a-c8f0-4189-a995-202a5a8a07db" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.178282 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.182413 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.182671 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.196023 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp"] Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.262106 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-config-volume\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.262197 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-secret-volume\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.262231 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-676gt\" (UniqueName: \"kubernetes.io/projected/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-kube-api-access-676gt\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.363872 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-config-volume\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.363957 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-secret-volume\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.363995 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-676gt\" (UniqueName: \"kubernetes.io/projected/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-kube-api-access-676gt\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.364947 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-config-volume\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.370424 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-secret-volume\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.382020 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-676gt\" (UniqueName: \"kubernetes.io/projected/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-kube-api-access-676gt\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.511895 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:01 crc kubenswrapper[4942]: I0218 20:00:01.079788 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp"] Feb 18 20:00:02 crc kubenswrapper[4942]: I0218 20:00:02.012296 4942 generic.go:334] "Generic (PLEG): container finished" podID="84ccdc2e-1528-43d4-9c24-42f72bfbb0de" containerID="61bb3a2b09293111d8de2349b0416e5a02bfa7aaf7424af19bf5902a23d6049e" exitCode=0 Feb 18 20:00:02 crc kubenswrapper[4942]: I0218 20:00:02.012473 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" event={"ID":"84ccdc2e-1528-43d4-9c24-42f72bfbb0de","Type":"ContainerDied","Data":"61bb3a2b09293111d8de2349b0416e5a02bfa7aaf7424af19bf5902a23d6049e"} Feb 18 20:00:02 crc kubenswrapper[4942]: I0218 20:00:02.012698 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" event={"ID":"84ccdc2e-1528-43d4-9c24-42f72bfbb0de","Type":"ContainerStarted","Data":"3f160f06bcdb4ed80d5a39638f45d7b063d7ba8757457db74483d1ab8f5566cc"} Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.322951 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.440521 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-676gt\" (UniqueName: \"kubernetes.io/projected/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-kube-api-access-676gt\") pod \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.440566 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-config-volume\") pod \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.440593 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-secret-volume\") pod \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.441925 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-config-volume" (OuterVolumeSpecName: "config-volume") pod "84ccdc2e-1528-43d4-9c24-42f72bfbb0de" (UID: "84ccdc2e-1528-43d4-9c24-42f72bfbb0de"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.446089 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-kube-api-access-676gt" (OuterVolumeSpecName: "kube-api-access-676gt") pod "84ccdc2e-1528-43d4-9c24-42f72bfbb0de" (UID: "84ccdc2e-1528-43d4-9c24-42f72bfbb0de"). 
InnerVolumeSpecName "kube-api-access-676gt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.448039 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "84ccdc2e-1528-43d4-9c24-42f72bfbb0de" (UID: "84ccdc2e-1528-43d4-9c24-42f72bfbb0de"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.542868 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-676gt\" (UniqueName: \"kubernetes.io/projected/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-kube-api-access-676gt\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.542920 4942 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.542931 4942 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:04 crc kubenswrapper[4942]: I0218 20:00:04.036725 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" event={"ID":"84ccdc2e-1528-43d4-9c24-42f72bfbb0de","Type":"ContainerDied","Data":"3f160f06bcdb4ed80d5a39638f45d7b063d7ba8757457db74483d1ab8f5566cc"}
Feb 18 20:00:04 crc kubenswrapper[4942]: I0218 20:00:04.036819 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f160f06bcdb4ed80d5a39638f45d7b063d7ba8757457db74483d1ab8f5566cc"
Feb 18 20:00:04 crc kubenswrapper[4942]: I0218 20:00:04.036888 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp"
Feb 18 20:00:04 crc kubenswrapper[4942]: I0218 20:00:04.421356 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"]
Feb 18 20:00:04 crc kubenswrapper[4942]: I0218 20:00:04.433099 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"]
Feb 18 20:00:05 crc kubenswrapper[4942]: I0218 20:00:05.058923 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ba4570-01bb-4964-8c1d-791c25d72a1a" path="/var/lib/kubelet/pods/01ba4570-01bb-4964-8c1d-791c25d72a1a/volumes"
Feb 18 20:00:08 crc kubenswrapper[4942]: I0218 20:00:08.036757 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d"
Feb 18 20:00:08 crc kubenswrapper[4942]: E0218 20:00:08.037274 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:00:19 crc kubenswrapper[4942]: I0218 20:00:19.036867 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d"
Feb 18 20:00:19 crc kubenswrapper[4942]: E0218 20:00:19.038167 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:00:25 crc kubenswrapper[4942]: I0218 20:00:25.057697 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 18 20:00:25 crc kubenswrapper[4942]: I0218 20:00:25.058526 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="prometheus" containerID="cri-o://3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845" gracePeriod=600
Feb 18 20:00:25 crc kubenswrapper[4942]: I0218 20:00:25.058610 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="thanos-sidecar" containerID="cri-o://d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98" gracePeriod=600
Feb 18 20:00:25 crc kubenswrapper[4942]: I0218 20:00:25.058671 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="config-reloader" containerID="cri-o://188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109" gracePeriod=600
Feb 18 20:00:25 crc kubenswrapper[4942]: I0218 20:00:25.311892 4942 generic.go:334] "Generic (PLEG): container finished" podID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerID="d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98" exitCode=0
Feb 18 20:00:25 crc kubenswrapper[4942]: I0218 20:00:25.312211 4942 generic.go:334] "Generic (PLEG): container finished" podID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerID="3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845" exitCode=0
Feb 18 20:00:25 crc kubenswrapper[4942]: I0218 20:00:25.312240 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerDied","Data":"d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98"}
Feb 18 20:00:25 crc kubenswrapper[4942]: I0218 20:00:25.312273 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerDied","Data":"3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845"}
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.187836 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.313847 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-1\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") "
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.313932 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-0\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") "
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.313962 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-config\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") "
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.314645 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") "
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.314746 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-thanos-prometheus-http-client-file\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") "
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.314814 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-tls-assets\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") "
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.314876 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s685z\" (UniqueName: \"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-kube-api-access-s685z\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") "
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.314927 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") "
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.314962 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-secret-combined-ca-bundle\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") "
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.315010 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-2\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") "
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.315087 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") "
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.315120 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/219b2aa4-0497-40f8-a3d0-947d37be720d-config-out\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") "
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.315153 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") "
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.314868 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.315517 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.315641 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.316281 4942 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.316307 4942 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.316318 4942 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.321542 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/219b2aa4-0497-40f8-a3d0-947d37be720d-config-out" (OuterVolumeSpecName: "config-out") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.323404 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.323491 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.323790 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-kube-api-access-s685z" (OuterVolumeSpecName: "kube-api-access-s685z") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "kube-api-access-s685z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.323934 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.324026 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.324105 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-config" (OuterVolumeSpecName: "config") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.328340 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.336582 4942 generic.go:334] "Generic (PLEG): container finished" podID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerID="188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109" exitCode=0
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.336633 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerDied","Data":"188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109"}
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.336670 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerDied","Data":"60c6687648dd41b94a4225ed03866cf4c665cec18c0eb5d84fcb09f0dbc7012b"}
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.336695 4942 scope.go:117] "RemoveContainer" containerID="d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.336711 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.343566 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.387749 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config" (OuterVolumeSpecName: "web-config") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418551 4942 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418599 4942 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/219b2aa4-0497-40f8-a3d0-947d37be720d-config-out\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418614 4942 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418631 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-config\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418676 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") on node \"crc\" "
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418691 4942 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418705 4942 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-tls-assets\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418719 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s685z\" (UniqueName: \"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-kube-api-access-s685z\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418731 4942 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418746 4942 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.456431 4942 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.456813 4942 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5") on node "crc"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.500905 4942 scope.go:117] "RemoveContainer" containerID="188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.519588 4942 reconciler_common.go:293] "Volume detached for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.526407 4942 scope.go:117] "RemoveContainer" containerID="3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.571041 4942 scope.go:117] "RemoveContainer" containerID="7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.604616 4942 scope.go:117] "RemoveContainer" containerID="d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98"
Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.605163 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98\": container with ID starting with d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98 not found: ID does not exist" containerID="d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.605227 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98"} err="failed to get container status \"d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98\": rpc error: code = NotFound desc = could not find container \"d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98\": container with ID starting with d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98 not found: ID does not exist"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.605262 4942 scope.go:117] "RemoveContainer" containerID="188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109"
Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.605594 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109\": container with ID starting with 188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109 not found: ID does not exist" containerID="188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.605629 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109"} err="failed to get container status \"188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109\": rpc error: code = NotFound desc = could not find container \"188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109\": container with ID starting with 188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109 not found: ID does not exist"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.605652 4942 scope.go:117] "RemoveContainer" containerID="3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845"
Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.606313 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845\": container with ID starting with 3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845 not found: ID does not exist" containerID="3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.606345 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845"} err="failed to get container status \"3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845\": rpc error: code = NotFound desc = could not find container \"3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845\": container with ID starting with 3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845 not found: ID does not exist"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.606367 4942 scope.go:117] "RemoveContainer" containerID="7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067"
Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.606660 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067\": container with ID starting with 7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067 not found: ID does not exist" containerID="7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.606729 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067"} err="failed to get container status \"7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067\": rpc error: code = NotFound desc = could not find container \"7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067\": container with ID starting with 7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067 not found: ID does not exist"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.693509 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.713447 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.730834 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.731225 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="init-config-reloader"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731241 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="init-config-reloader"
Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.731257 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="thanos-sidecar"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731264 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="thanos-sidecar"
Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.731274 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ccdc2e-1528-43d4-9c24-42f72bfbb0de" containerName="collect-profiles"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731281 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ccdc2e-1528-43d4-9c24-42f72bfbb0de" containerName="collect-profiles"
Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.731300 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="prometheus"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731305 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="prometheus"
Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.731313 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="config-reloader"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731319 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="config-reloader"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731500 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="prometheus"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731513 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="84ccdc2e-1528-43d4-9c24-42f72bfbb0de" containerName="collect-profiles"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731528 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="thanos-sidecar"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731539 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="config-reloader"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.745707 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.745832 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.747820 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.748107 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.748294 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.748498 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7f4m2"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.748783 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.748916 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.749121 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.755385 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.925549 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.925856 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.925973 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.926089 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.926170 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.926251 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.926338 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.926418 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.926510 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.926597 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-config\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.926741 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.926916 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.927058 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xjld\" (UniqueName: \"kubernetes.io/projected/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-kube-api-access-5xjld\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.029041 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xjld\" (UniqueName: \"kubernetes.io/projected/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-kube-api-access-5xjld\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.029520 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-tls-assets\") pod \"prometheus-metric-storage-0\" (UID:
\"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.029572 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.029639 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.029798 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.029867 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.029918 4942 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.029982 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.030014 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.030065 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.030112 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-config\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.030221 4942 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.030302 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.030718 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.031201 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.031641 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " 
pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.034066 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.036065 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.037183 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-config\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.039991 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.039995 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: 
\"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.042388 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.044165 4942 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.044219 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/70b345b463ff13ff33bce45da0f4a8796a1574afa2d8fd2ecf4f2239b34767fb/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.046891 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.049743 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" path="/var/lib/kubelet/pods/219b2aa4-0497-40f8-a3d0-947d37be720d/volumes" Feb 18 20:00:27 crc 
kubenswrapper[4942]: I0218 20:00:27.052168 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.061787 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xjld\" (UniqueName: \"kubernetes.io/projected/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-kube-api-access-5xjld\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.085981 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.361525 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.918531 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:00:28 crc kubenswrapper[4942]: I0218 20:00:28.361674 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555","Type":"ContainerStarted","Data":"545326c73b65ccb8a02bb743e6b1e4d065270e3279a3f53424d256e608ba6aea"} Feb 18 20:00:32 crc kubenswrapper[4942]: I0218 20:00:32.036364 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:00:32 crc kubenswrapper[4942]: E0218 20:00:32.037318 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:00:33 crc kubenswrapper[4942]: I0218 20:00:33.427538 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555","Type":"ContainerStarted","Data":"fd92dca173e7de10a9954c7748a19c58e83ce614f9318931ad2724cfd5ccc508"} Feb 18 20:00:42 crc kubenswrapper[4942]: I0218 20:00:42.529720 4942 generic.go:334] "Generic (PLEG): container finished" podID="3ddfc3cc-08ad-436c-b5e9-0ab2ee325555" containerID="fd92dca173e7de10a9954c7748a19c58e83ce614f9318931ad2724cfd5ccc508" exitCode=0 Feb 18 20:00:42 crc kubenswrapper[4942]: I0218 20:00:42.529874 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555","Type":"ContainerDied","Data":"fd92dca173e7de10a9954c7748a19c58e83ce614f9318931ad2724cfd5ccc508"} Feb 18 20:00:43 crc kubenswrapper[4942]: I0218 20:00:43.545543 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555","Type":"ContainerStarted","Data":"add4033ed1184099f2d7277281b19211db0936b4f10bedf77a4a511bca20b42e"} Feb 18 20:00:45 crc kubenswrapper[4942]: I0218 20:00:45.037156 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:00:45 crc kubenswrapper[4942]: E0218 20:00:45.037811 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:00:46 crc kubenswrapper[4942]: I0218 20:00:46.576910 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555","Type":"ContainerStarted","Data":"35d9b0bb66a5a8bb7a2fdaaab5c93c11198c98a6c020e68354d41a73c50bc2c4"} Feb 18 20:00:46 crc kubenswrapper[4942]: I0218 20:00:46.577569 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555","Type":"ContainerStarted","Data":"c21b89f287762670936c49e3c845de86b6fab5771fb13375c928cd0141b4bdca"} Feb 18 20:00:46 crc kubenswrapper[4942]: I0218 20:00:46.618186 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.617758131 
podStartE2EDuration="20.617758131s" podCreationTimestamp="2026-02-18 20:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:00:46.608826545 +0000 UTC m=+2606.313759220" watchObservedRunningTime="2026-02-18 20:00:46.617758131 +0000 UTC m=+2606.322690806" Feb 18 20:00:47 crc kubenswrapper[4942]: I0218 20:00:47.362219 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:49 crc kubenswrapper[4942]: I0218 20:00:49.749110 4942 scope.go:117] "RemoveContainer" containerID="01241740eda1e01b1148596092553039bc8d0f4fa82bfe1851e652e1a9db2c10" Feb 18 20:00:49 crc kubenswrapper[4942]: I0218 20:00:49.789793 4942 scope.go:117] "RemoveContainer" containerID="e6e36f3a740b91dbd03b01c5e3d04984228711747c2ab244bd4357d34fe38eec" Feb 18 20:00:49 crc kubenswrapper[4942]: I0218 20:00:49.822890 4942 scope.go:117] "RemoveContainer" containerID="fced62a823aabc9eb96d7dc1c21c39c26f67347f087ea0b1c45827cef7157377" Feb 18 20:00:49 crc kubenswrapper[4942]: I0218 20:00:49.854006 4942 scope.go:117] "RemoveContainer" containerID="5fb82fb77a7895a43a30ace42481cf4c1da624e8742b15c1cb5a5cf3044d7c22" Feb 18 20:00:57 crc kubenswrapper[4942]: I0218 20:00:57.363278 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:57 crc kubenswrapper[4942]: I0218 20:00:57.374982 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:57 crc kubenswrapper[4942]: I0218 20:00:57.695018 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.036391 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:01:00 crc 
kubenswrapper[4942]: E0218 20:01:00.036916 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.151543 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524081-m78nz"] Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.152957 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.171424 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524081-m78nz"] Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.269164 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbxkl\" (UniqueName: \"kubernetes.io/projected/2bd5363d-fb40-4123-b9bb-5e6179d65b44-kube-api-access-kbxkl\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.269317 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-config-data\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.269349 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-fernet-keys\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.269371 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-combined-ca-bundle\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.371232 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbxkl\" (UniqueName: \"kubernetes.io/projected/2bd5363d-fb40-4123-b9bb-5e6179d65b44-kube-api-access-kbxkl\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.371383 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-config-data\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.371416 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-fernet-keys\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.371445 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-combined-ca-bundle\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.378488 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-combined-ca-bundle\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.379862 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-fernet-keys\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.381222 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-config-data\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.394955 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbxkl\" (UniqueName: \"kubernetes.io/projected/2bd5363d-fb40-4123-b9bb-5e6179d65b44-kube-api-access-kbxkl\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.473250 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.912213 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524081-m78nz"] Feb 18 20:01:01 crc kubenswrapper[4942]: I0218 20:01:01.726021 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-m78nz" event={"ID":"2bd5363d-fb40-4123-b9bb-5e6179d65b44","Type":"ContainerStarted","Data":"028bdad6670299f79ade249de30440c67f798800431953939dfd578e4bc4642c"} Feb 18 20:01:01 crc kubenswrapper[4942]: I0218 20:01:01.726341 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-m78nz" event={"ID":"2bd5363d-fb40-4123-b9bb-5e6179d65b44","Type":"ContainerStarted","Data":"2ff9040fd76ba76041a0011c39da104db97678efd87e14759f6c3866c30d61ee"} Feb 18 20:01:01 crc kubenswrapper[4942]: I0218 20:01:01.765727 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29524081-m78nz" podStartSLOduration=1.765705041 podStartE2EDuration="1.765705041s" podCreationTimestamp="2026-02-18 20:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:01:01.763654737 +0000 UTC m=+2621.468587402" watchObservedRunningTime="2026-02-18 20:01:01.765705041 +0000 UTC m=+2621.470637706" Feb 18 20:01:03 crc kubenswrapper[4942]: I0218 20:01:03.749674 4942 generic.go:334] "Generic (PLEG): container finished" podID="2bd5363d-fb40-4123-b9bb-5e6179d65b44" containerID="028bdad6670299f79ade249de30440c67f798800431953939dfd578e4bc4642c" exitCode=0 Feb 18 20:01:03 crc kubenswrapper[4942]: I0218 20:01:03.749822 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-m78nz" 
event={"ID":"2bd5363d-fb40-4123-b9bb-5e6179d65b44","Type":"ContainerDied","Data":"028bdad6670299f79ade249de30440c67f798800431953939dfd578e4bc4642c"} Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.158299 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.284209 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-combined-ca-bundle\") pod \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.284423 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-config-data\") pod \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.284497 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-fernet-keys\") pod \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.284624 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbxkl\" (UniqueName: \"kubernetes.io/projected/2bd5363d-fb40-4123-b9bb-5e6179d65b44-kube-api-access-kbxkl\") pod \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.290147 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd5363d-fb40-4123-b9bb-5e6179d65b44-kube-api-access-kbxkl" 
(OuterVolumeSpecName: "kube-api-access-kbxkl") pod "2bd5363d-fb40-4123-b9bb-5e6179d65b44" (UID: "2bd5363d-fb40-4123-b9bb-5e6179d65b44"). InnerVolumeSpecName "kube-api-access-kbxkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.290924 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2bd5363d-fb40-4123-b9bb-5e6179d65b44" (UID: "2bd5363d-fb40-4123-b9bb-5e6179d65b44"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.315064 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bd5363d-fb40-4123-b9bb-5e6179d65b44" (UID: "2bd5363d-fb40-4123-b9bb-5e6179d65b44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.352123 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-config-data" (OuterVolumeSpecName: "config-data") pod "2bd5363d-fb40-4123-b9bb-5e6179d65b44" (UID: "2bd5363d-fb40-4123-b9bb-5e6179d65b44"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.386919 4942 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.386960 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbxkl\" (UniqueName: \"kubernetes.io/projected/2bd5363d-fb40-4123-b9bb-5e6179d65b44-kube-api-access-kbxkl\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.386974 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.386987 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.772134 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-m78nz" event={"ID":"2bd5363d-fb40-4123-b9bb-5e6179d65b44","Type":"ContainerDied","Data":"2ff9040fd76ba76041a0011c39da104db97678efd87e14759f6c3866c30d61ee"} Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.772350 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ff9040fd76ba76041a0011c39da104db97678efd87e14759f6c3866c30d61ee" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.772400 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:14 crc kubenswrapper[4942]: I0218 20:01:14.037548 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:01:14 crc kubenswrapper[4942]: E0218 20:01:14.038272 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.164849 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:01:19 crc kubenswrapper[4942]: E0218 20:01:19.165709 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd5363d-fb40-4123-b9bb-5e6179d65b44" containerName="keystone-cron" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.165722 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd5363d-fb40-4123-b9bb-5e6179d65b44" containerName="keystone-cron" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.165960 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd5363d-fb40-4123-b9bb-5e6179d65b44" containerName="keystone-cron" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.166625 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.169568 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.170015 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.170246 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fgwq4" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.171563 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.186088 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.322967 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.323026 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.323071 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.323116 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.323148 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.323211 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.323306 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnpsv\" (UniqueName: \"kubernetes.io/projected/498a3ae0-adb2-4729-a2eb-78e267e1613b-kube-api-access-nnpsv\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.323405 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-config-data\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.323546 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.424821 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.424868 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.424898 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.424921 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config-secret\") pod 
\"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.424941 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.424976 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.424994 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnpsv\" (UniqueName: \"kubernetes.io/projected/498a3ae0-adb2-4729-a2eb-78e267e1613b-kube-api-access-nnpsv\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.425039 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-config-data\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.425112 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " 
pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.425289 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.425325 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.425867 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.425903 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.426969 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-config-data\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc 
kubenswrapper[4942]: I0218 20:01:19.429827 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.429950 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.433136 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.450555 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnpsv\" (UniqueName: \"kubernetes.io/projected/498a3ae0-adb2-4729-a2eb-78e267e1613b-kube-api-access-nnpsv\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.457255 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.488904 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.936731 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.957963 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.981855 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"498a3ae0-adb2-4729-a2eb-78e267e1613b","Type":"ContainerStarted","Data":"4638cc0d3971f910691e7e7ad60b86d01493160078b4f86a07d3570748f42e2f"} Feb 18 20:01:26 crc kubenswrapper[4942]: I0218 20:01:26.035776 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:01:26 crc kubenswrapper[4942]: E0218 20:01:26.036696 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:01:31 crc kubenswrapper[4942]: I0218 20:01:31.034932 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 18 20:01:32 crc kubenswrapper[4942]: I0218 20:01:32.133534 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"498a3ae0-adb2-4729-a2eb-78e267e1613b","Type":"ContainerStarted","Data":"169b9c7b6b3a31907bbb5568c6300b81731785a07744ed74ff40a7d3cf050f29"} Feb 18 20:01:32 crc kubenswrapper[4942]: I0218 20:01:32.189018 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/tempest-tests-tempest" podStartSLOduration=3.118768519 podStartE2EDuration="14.188994483s" podCreationTimestamp="2026-02-18 20:01:18 +0000 UTC" firstStartedPulling="2026-02-18 20:01:19.957603691 +0000 UTC m=+2639.662536366" lastFinishedPulling="2026-02-18 20:01:31.027829625 +0000 UTC m=+2650.732762330" observedRunningTime="2026-02-18 20:01:32.166921968 +0000 UTC m=+2651.871854643" watchObservedRunningTime="2026-02-18 20:01:32.188994483 +0000 UTC m=+2651.893927158" Feb 18 20:01:38 crc kubenswrapper[4942]: I0218 20:01:38.035979 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:01:38 crc kubenswrapper[4942]: E0218 20:01:38.036828 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:01:49 crc kubenswrapper[4942]: I0218 20:01:49.037097 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:01:49 crc kubenswrapper[4942]: E0218 20:01:49.038014 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:02:03 crc kubenswrapper[4942]: I0218 20:02:03.035691 4942 scope.go:117] "RemoveContainer" 
containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:02:03 crc kubenswrapper[4942]: E0218 20:02:03.037647 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:02:18 crc kubenswrapper[4942]: I0218 20:02:18.036177 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:02:18 crc kubenswrapper[4942]: E0218 20:02:18.036946 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.704652 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v69dv"] Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.707537 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.725511 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v69dv"] Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.851262 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8kwr\" (UniqueName: \"kubernetes.io/projected/0f306d5c-e9fd-4d66-babc-d5812662f2c6-kube-api-access-m8kwr\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.851753 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-catalog-content\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.851931 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-utilities\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.953990 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-catalog-content\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.954082 4942 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-utilities\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.954589 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-catalog-content\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.954707 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-utilities\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.955203 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8kwr\" (UniqueName: \"kubernetes.io/projected/0f306d5c-e9fd-4d66-babc-d5812662f2c6-kube-api-access-m8kwr\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.984292 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8kwr\" (UniqueName: \"kubernetes.io/projected/0f306d5c-e9fd-4d66-babc-d5812662f2c6-kube-api-access-m8kwr\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:27 crc kubenswrapper[4942]: I0218 20:02:27.036384 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:27 crc kubenswrapper[4942]: I0218 20:02:27.562551 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v69dv"] Feb 18 20:02:27 crc kubenswrapper[4942]: I0218 20:02:27.771240 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v69dv" event={"ID":"0f306d5c-e9fd-4d66-babc-d5812662f2c6","Type":"ContainerStarted","Data":"489436400d82be93fb769cbcdd5323663ef8c990df5f4e0eb67e5cdeeade6085"} Feb 18 20:02:28 crc kubenswrapper[4942]: I0218 20:02:28.789627 4942 generic.go:334] "Generic (PLEG): container finished" podID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerID="3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb" exitCode=0 Feb 18 20:02:28 crc kubenswrapper[4942]: I0218 20:02:28.789692 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v69dv" event={"ID":"0f306d5c-e9fd-4d66-babc-d5812662f2c6","Type":"ContainerDied","Data":"3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb"} Feb 18 20:02:29 crc kubenswrapper[4942]: I0218 20:02:29.035938 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:02:29 crc kubenswrapper[4942]: I0218 20:02:29.800954 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v69dv" event={"ID":"0f306d5c-e9fd-4d66-babc-d5812662f2c6","Type":"ContainerStarted","Data":"3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370"} Feb 18 20:02:29 crc kubenswrapper[4942]: I0218 20:02:29.804414 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"339398ef2c817c25ee087d2b884ff3bce0c2b59c4bf8c232769e062241809fa2"} Feb 18 20:02:34 crc 
kubenswrapper[4942]: I0218 20:02:34.852954 4942 generic.go:334] "Generic (PLEG): container finished" podID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerID="3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370" exitCode=0 Feb 18 20:02:34 crc kubenswrapper[4942]: I0218 20:02:34.853019 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v69dv" event={"ID":"0f306d5c-e9fd-4d66-babc-d5812662f2c6","Type":"ContainerDied","Data":"3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370"} Feb 18 20:02:35 crc kubenswrapper[4942]: I0218 20:02:35.866108 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v69dv" event={"ID":"0f306d5c-e9fd-4d66-babc-d5812662f2c6","Type":"ContainerStarted","Data":"747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704"} Feb 18 20:02:35 crc kubenswrapper[4942]: I0218 20:02:35.888342 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v69dv" podStartSLOduration=3.425524291 podStartE2EDuration="9.888323736s" podCreationTimestamp="2026-02-18 20:02:26 +0000 UTC" firstStartedPulling="2026-02-18 20:02:28.793179841 +0000 UTC m=+2708.498112506" lastFinishedPulling="2026-02-18 20:02:35.255979266 +0000 UTC m=+2714.960911951" observedRunningTime="2026-02-18 20:02:35.886898708 +0000 UTC m=+2715.591831373" watchObservedRunningTime="2026-02-18 20:02:35.888323736 +0000 UTC m=+2715.593256401" Feb 18 20:02:37 crc kubenswrapper[4942]: I0218 20:02:37.047602 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:37 crc kubenswrapper[4942]: I0218 20:02:37.047928 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:38 crc kubenswrapper[4942]: I0218 20:02:38.098874 4942 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-v69dv" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerName="registry-server" probeResult="failure" output=< Feb 18 20:02:38 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 20:02:38 crc kubenswrapper[4942]: > Feb 18 20:02:47 crc kubenswrapper[4942]: I0218 20:02:47.098398 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:47 crc kubenswrapper[4942]: I0218 20:02:47.153698 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:47 crc kubenswrapper[4942]: I0218 20:02:47.335098 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v69dv"] Feb 18 20:02:48 crc kubenswrapper[4942]: I0218 20:02:48.994909 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v69dv" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerName="registry-server" containerID="cri-o://747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704" gracePeriod=2 Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.547380 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.652914 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-utilities\") pod \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.653023 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8kwr\" (UniqueName: \"kubernetes.io/projected/0f306d5c-e9fd-4d66-babc-d5812662f2c6-kube-api-access-m8kwr\") pod \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.653114 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-catalog-content\") pod \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.653976 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-utilities" (OuterVolumeSpecName: "utilities") pod "0f306d5c-e9fd-4d66-babc-d5812662f2c6" (UID: "0f306d5c-e9fd-4d66-babc-d5812662f2c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.661413 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f306d5c-e9fd-4d66-babc-d5812662f2c6-kube-api-access-m8kwr" (OuterVolumeSpecName: "kube-api-access-m8kwr") pod "0f306d5c-e9fd-4d66-babc-d5812662f2c6" (UID: "0f306d5c-e9fd-4d66-babc-d5812662f2c6"). InnerVolumeSpecName "kube-api-access-m8kwr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.756837 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.756886 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8kwr\" (UniqueName: \"kubernetes.io/projected/0f306d5c-e9fd-4d66-babc-d5812662f2c6-kube-api-access-m8kwr\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.815688 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f306d5c-e9fd-4d66-babc-d5812662f2c6" (UID: "0f306d5c-e9fd-4d66-babc-d5812662f2c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.859103 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.007954 4942 generic.go:334] "Generic (PLEG): container finished" podID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerID="747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704" exitCode=0 Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.008004 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v69dv" event={"ID":"0f306d5c-e9fd-4d66-babc-d5812662f2c6","Type":"ContainerDied","Data":"747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704"} Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.008034 4942 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-v69dv" event={"ID":"0f306d5c-e9fd-4d66-babc-d5812662f2c6","Type":"ContainerDied","Data":"489436400d82be93fb769cbcdd5323663ef8c990df5f4e0eb67e5cdeeade6085"} Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.008054 4942 scope.go:117] "RemoveContainer" containerID="747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704" Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.008177 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.036530 4942 scope.go:117] "RemoveContainer" containerID="3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370" Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.052882 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v69dv"] Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.060973 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v69dv"] Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.072060 4942 scope.go:117] "RemoveContainer" containerID="3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb" Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.135065 4942 scope.go:117] "RemoveContainer" containerID="747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704" Feb 18 20:02:50 crc kubenswrapper[4942]: E0218 20:02:50.135473 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704\": container with ID starting with 747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704 not found: ID does not exist" containerID="747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704" Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.135513 4942 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704"} err="failed to get container status \"747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704\": rpc error: code = NotFound desc = could not find container \"747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704\": container with ID starting with 747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704 not found: ID does not exist" Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.135539 4942 scope.go:117] "RemoveContainer" containerID="3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370" Feb 18 20:02:50 crc kubenswrapper[4942]: E0218 20:02:50.136132 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370\": container with ID starting with 3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370 not found: ID does not exist" containerID="3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370" Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.136193 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370"} err="failed to get container status \"3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370\": rpc error: code = NotFound desc = could not find container \"3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370\": container with ID starting with 3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370 not found: ID does not exist" Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.136224 4942 scope.go:117] "RemoveContainer" containerID="3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb" Feb 18 20:02:50 crc kubenswrapper[4942]: E0218 
20:02:50.136483 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb\": container with ID starting with 3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb not found: ID does not exist" containerID="3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb" Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.136507 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb"} err="failed to get container status \"3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb\": rpc error: code = NotFound desc = could not find container \"3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb\": container with ID starting with 3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb not found: ID does not exist" Feb 18 20:02:51 crc kubenswrapper[4942]: I0218 20:02:51.054614 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" path="/var/lib/kubelet/pods/0f306d5c-e9fd-4d66-babc-d5812662f2c6/volumes" Feb 18 20:04:08 crc kubenswrapper[4942]: I0218 20:04:08.956827 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rm4qf"] Feb 18 20:04:08 crc kubenswrapper[4942]: E0218 20:04:08.957815 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerName="extract-content" Feb 18 20:04:08 crc kubenswrapper[4942]: I0218 20:04:08.957955 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerName="extract-content" Feb 18 20:04:08 crc kubenswrapper[4942]: E0218 20:04:08.957981 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" 
containerName="registry-server" Feb 18 20:04:08 crc kubenswrapper[4942]: I0218 20:04:08.958015 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerName="registry-server" Feb 18 20:04:08 crc kubenswrapper[4942]: E0218 20:04:08.958052 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerName="extract-utilities" Feb 18 20:04:08 crc kubenswrapper[4942]: I0218 20:04:08.958061 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerName="extract-utilities" Feb 18 20:04:08 crc kubenswrapper[4942]: I0218 20:04:08.958501 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerName="registry-server" Feb 18 20:04:08 crc kubenswrapper[4942]: I0218 20:04:08.961029 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm4qf" Feb 18 20:04:08 crc kubenswrapper[4942]: I0218 20:04:08.983428 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm4qf"] Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.008904 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwdkj\" (UniqueName: \"kubernetes.io/projected/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-kube-api-access-qwdkj\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " pod="openshift-marketplace/redhat-marketplace-rm4qf" Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.009007 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-utilities\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " 
pod="openshift-marketplace/redhat-marketplace-rm4qf" Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.009046 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-catalog-content\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " pod="openshift-marketplace/redhat-marketplace-rm4qf" Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.110919 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-catalog-content\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " pod="openshift-marketplace/redhat-marketplace-rm4qf" Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.111158 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwdkj\" (UniqueName: \"kubernetes.io/projected/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-kube-api-access-qwdkj\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " pod="openshift-marketplace/redhat-marketplace-rm4qf" Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.111263 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-utilities\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " pod="openshift-marketplace/redhat-marketplace-rm4qf" Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.112193 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-catalog-content\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " 
pod="openshift-marketplace/redhat-marketplace-rm4qf" Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.112256 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-utilities\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " pod="openshift-marketplace/redhat-marketplace-rm4qf" Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.139955 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwdkj\" (UniqueName: \"kubernetes.io/projected/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-kube-api-access-qwdkj\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " pod="openshift-marketplace/redhat-marketplace-rm4qf" Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.291902 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm4qf" Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.777129 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm4qf"] Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.853274 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm4qf" event={"ID":"df296b06-0ec2-4b9b-bf0c-f93f98b2f928","Type":"ContainerStarted","Data":"7c8900315c93c1b686c84d136b2ab2bcaad574034c6670f39b754103e2492749"} Feb 18 20:04:10 crc kubenswrapper[4942]: I0218 20:04:10.866280 4942 generic.go:334] "Generic (PLEG): container finished" podID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerID="4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6" exitCode=0 Feb 18 20:04:10 crc kubenswrapper[4942]: I0218 20:04:10.866376 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm4qf" 
event={"ID":"df296b06-0ec2-4b9b-bf0c-f93f98b2f928","Type":"ContainerDied","Data":"4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6"} Feb 18 20:04:12 crc kubenswrapper[4942]: I0218 20:04:12.895726 4942 generic.go:334] "Generic (PLEG): container finished" podID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerID="719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66" exitCode=0 Feb 18 20:04:12 crc kubenswrapper[4942]: I0218 20:04:12.895865 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm4qf" event={"ID":"df296b06-0ec2-4b9b-bf0c-f93f98b2f928","Type":"ContainerDied","Data":"719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66"} Feb 18 20:04:13 crc kubenswrapper[4942]: I0218 20:04:13.917382 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm4qf" event={"ID":"df296b06-0ec2-4b9b-bf0c-f93f98b2f928","Type":"ContainerStarted","Data":"a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a"} Feb 18 20:04:13 crc kubenswrapper[4942]: I0218 20:04:13.948628 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rm4qf" podStartSLOduration=3.513543258 podStartE2EDuration="5.948600679s" podCreationTimestamp="2026-02-18 20:04:08 +0000 UTC" firstStartedPulling="2026-02-18 20:04:10.869584401 +0000 UTC m=+2810.574517116" lastFinishedPulling="2026-02-18 20:04:13.304641852 +0000 UTC m=+2813.009574537" observedRunningTime="2026-02-18 20:04:13.943644028 +0000 UTC m=+2813.648576703" watchObservedRunningTime="2026-02-18 20:04:13.948600679 +0000 UTC m=+2813.653533344" Feb 18 20:04:19 crc kubenswrapper[4942]: I0218 20:04:19.292962 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rm4qf" Feb 18 20:04:19 crc kubenswrapper[4942]: I0218 20:04:19.293714 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rm4qf" Feb 18 20:04:19 crc kubenswrapper[4942]: I0218 20:04:19.378852 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rm4qf" Feb 18 20:04:20 crc kubenswrapper[4942]: I0218 20:04:20.062265 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rm4qf" Feb 18 20:04:20 crc kubenswrapper[4942]: I0218 20:04:20.123205 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm4qf"] Feb 18 20:04:21 crc kubenswrapper[4942]: I0218 20:04:21.999133 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rm4qf" podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerName="registry-server" containerID="cri-o://a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a" gracePeriod=2 Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.535982 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm4qf" Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.601410 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-catalog-content\") pod \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.603974 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-utilities\") pod \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.604174 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwdkj\" (UniqueName: \"kubernetes.io/projected/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-kube-api-access-qwdkj\") pod \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.605897 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-utilities" (OuterVolumeSpecName: "utilities") pod "df296b06-0ec2-4b9b-bf0c-f93f98b2f928" (UID: "df296b06-0ec2-4b9b-bf0c-f93f98b2f928"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.611592 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-kube-api-access-qwdkj" (OuterVolumeSpecName: "kube-api-access-qwdkj") pod "df296b06-0ec2-4b9b-bf0c-f93f98b2f928" (UID: "df296b06-0ec2-4b9b-bf0c-f93f98b2f928"). InnerVolumeSpecName "kube-api-access-qwdkj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.629241 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df296b06-0ec2-4b9b-bf0c-f93f98b2f928" (UID: "df296b06-0ec2-4b9b-bf0c-f93f98b2f928"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.707423 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.707454 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.707464 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwdkj\" (UniqueName: \"kubernetes.io/projected/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-kube-api-access-qwdkj\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.008826 4942 generic.go:334] "Generic (PLEG): container finished" podID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerID="a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a" exitCode=0 Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.008938 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm4qf" event={"ID":"df296b06-0ec2-4b9b-bf0c-f93f98b2f928","Type":"ContainerDied","Data":"a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a"} Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.009617 4942 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-rm4qf" event={"ID":"df296b06-0ec2-4b9b-bf0c-f93f98b2f928","Type":"ContainerDied","Data":"7c8900315c93c1b686c84d136b2ab2bcaad574034c6670f39b754103e2492749"} Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.008950 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm4qf" Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.009679 4942 scope.go:117] "RemoveContainer" containerID="a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a" Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.029834 4942 scope.go:117] "RemoveContainer" containerID="719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66" Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.048282 4942 scope.go:117] "RemoveContainer" containerID="4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6" Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.076326 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm4qf"] Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.089986 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm4qf"] Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.127115 4942 scope.go:117] "RemoveContainer" containerID="a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a" Feb 18 20:04:23 crc kubenswrapper[4942]: E0218 20:04:23.127706 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a\": container with ID starting with a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a not found: ID does not exist" containerID="a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a" Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.127816 4942 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a"} err="failed to get container status \"a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a\": rpc error: code = NotFound desc = could not find container \"a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a\": container with ID starting with a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a not found: ID does not exist" Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.127858 4942 scope.go:117] "RemoveContainer" containerID="719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66" Feb 18 20:04:23 crc kubenswrapper[4942]: E0218 20:04:23.130472 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66\": container with ID starting with 719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66 not found: ID does not exist" containerID="719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66" Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.132098 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66"} err="failed to get container status \"719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66\": rpc error: code = NotFound desc = could not find container \"719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66\": container with ID starting with 719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66 not found: ID does not exist" Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.132165 4942 scope.go:117] "RemoveContainer" containerID="4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6" Feb 18 20:04:23 crc kubenswrapper[4942]: E0218 
20:04:23.133993 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6\": container with ID starting with 4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6 not found: ID does not exist" containerID="4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6" Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.134057 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6"} err="failed to get container status \"4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6\": rpc error: code = NotFound desc = could not find container \"4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6\": container with ID starting with 4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6 not found: ID does not exist" Feb 18 20:04:25 crc kubenswrapper[4942]: I0218 20:04:25.050874 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" path="/var/lib/kubelet/pods/df296b06-0ec2-4b9b-bf0c-f93f98b2f928/volumes" Feb 18 20:04:25 crc kubenswrapper[4942]: I0218 20:04:25.990271 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jfgwd"] Feb 18 20:04:25 crc kubenswrapper[4942]: E0218 20:04:25.990851 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerName="extract-utilities" Feb 18 20:04:25 crc kubenswrapper[4942]: I0218 20:04:25.990879 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerName="extract-utilities" Feb 18 20:04:25 crc kubenswrapper[4942]: E0218 20:04:25.990908 4942 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerName="extract-content" Feb 18 20:04:25 crc kubenswrapper[4942]: I0218 20:04:25.990920 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerName="extract-content" Feb 18 20:04:25 crc kubenswrapper[4942]: E0218 20:04:25.990958 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerName="registry-server" Feb 18 20:04:25 crc kubenswrapper[4942]: I0218 20:04:25.990971 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerName="registry-server" Feb 18 20:04:25 crc kubenswrapper[4942]: I0218 20:04:25.991296 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerName="registry-server" Feb 18 20:04:25 crc kubenswrapper[4942]: I0218 20:04:25.994002 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.012111 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jfgwd"] Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.085621 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-utilities\") pod \"certified-operators-jfgwd\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.085731 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx7pn\" (UniqueName: \"kubernetes.io/projected/3f7c8faf-9df9-40e0-83c7-8fb987985673-kube-api-access-lx7pn\") pod \"certified-operators-jfgwd\" (UID: 
\"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.085774 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-catalog-content\") pod \"certified-operators-jfgwd\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.187226 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-utilities\") pod \"certified-operators-jfgwd\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.187297 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx7pn\" (UniqueName: \"kubernetes.io/projected/3f7c8faf-9df9-40e0-83c7-8fb987985673-kube-api-access-lx7pn\") pod \"certified-operators-jfgwd\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.187325 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-catalog-content\") pod \"certified-operators-jfgwd\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.188091 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-catalog-content\") pod \"certified-operators-jfgwd\" (UID: 
\"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.188119 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-utilities\") pod \"certified-operators-jfgwd\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.207612 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx7pn\" (UniqueName: \"kubernetes.io/projected/3f7c8faf-9df9-40e0-83c7-8fb987985673-kube-api-access-lx7pn\") pod \"certified-operators-jfgwd\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.336074 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.824906 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jfgwd"] Feb 18 20:04:27 crc kubenswrapper[4942]: I0218 20:04:27.063119 4942 generic.go:334] "Generic (PLEG): container finished" podID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerID="8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e" exitCode=0 Feb 18 20:04:27 crc kubenswrapper[4942]: I0218 20:04:27.063182 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgwd" event={"ID":"3f7c8faf-9df9-40e0-83c7-8fb987985673","Type":"ContainerDied","Data":"8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e"} Feb 18 20:04:27 crc kubenswrapper[4942]: I0218 20:04:27.063217 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgwd" 
event={"ID":"3f7c8faf-9df9-40e0-83c7-8fb987985673","Type":"ContainerStarted","Data":"d3befc2b3f5841889d2c50989a675ede20bb79cfd1a022cebe42c3897bfc202a"} Feb 18 20:04:29 crc kubenswrapper[4942]: I0218 20:04:29.085084 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgwd" event={"ID":"3f7c8faf-9df9-40e0-83c7-8fb987985673","Type":"ContainerStarted","Data":"ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7"} Feb 18 20:04:30 crc kubenswrapper[4942]: I0218 20:04:30.097361 4942 generic.go:334] "Generic (PLEG): container finished" podID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerID="ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7" exitCode=0 Feb 18 20:04:30 crc kubenswrapper[4942]: I0218 20:04:30.097422 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgwd" event={"ID":"3f7c8faf-9df9-40e0-83c7-8fb987985673","Type":"ContainerDied","Data":"ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7"} Feb 18 20:04:31 crc kubenswrapper[4942]: I0218 20:04:31.110703 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgwd" event={"ID":"3f7c8faf-9df9-40e0-83c7-8fb987985673","Type":"ContainerStarted","Data":"b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631"} Feb 18 20:04:31 crc kubenswrapper[4942]: I0218 20:04:31.127167 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jfgwd" podStartSLOduration=2.432641881 podStartE2EDuration="6.127147353s" podCreationTimestamp="2026-02-18 20:04:25 +0000 UTC" firstStartedPulling="2026-02-18 20:04:27.065104242 +0000 UTC m=+2826.770036907" lastFinishedPulling="2026-02-18 20:04:30.759609724 +0000 UTC m=+2830.464542379" observedRunningTime="2026-02-18 20:04:31.125171411 +0000 UTC m=+2830.830104086" watchObservedRunningTime="2026-02-18 20:04:31.127147353 +0000 UTC 
m=+2830.832080028" Feb 18 20:04:36 crc kubenswrapper[4942]: I0218 20:04:36.336799 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:36 crc kubenswrapper[4942]: I0218 20:04:36.337483 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:36 crc kubenswrapper[4942]: I0218 20:04:36.389069 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:37 crc kubenswrapper[4942]: I0218 20:04:37.239745 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:37 crc kubenswrapper[4942]: I0218 20:04:37.289013 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jfgwd"] Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.207321 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jfgwd" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerName="registry-server" containerID="cri-o://b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631" gracePeriod=2 Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.726572 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.872472 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx7pn\" (UniqueName: \"kubernetes.io/projected/3f7c8faf-9df9-40e0-83c7-8fb987985673-kube-api-access-lx7pn\") pod \"3f7c8faf-9df9-40e0-83c7-8fb987985673\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.872709 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-utilities\") pod \"3f7c8faf-9df9-40e0-83c7-8fb987985673\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.872857 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-catalog-content\") pod \"3f7c8faf-9df9-40e0-83c7-8fb987985673\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.873626 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-utilities" (OuterVolumeSpecName: "utilities") pod "3f7c8faf-9df9-40e0-83c7-8fb987985673" (UID: "3f7c8faf-9df9-40e0-83c7-8fb987985673"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.881160 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f7c8faf-9df9-40e0-83c7-8fb987985673-kube-api-access-lx7pn" (OuterVolumeSpecName: "kube-api-access-lx7pn") pod "3f7c8faf-9df9-40e0-83c7-8fb987985673" (UID: "3f7c8faf-9df9-40e0-83c7-8fb987985673"). InnerVolumeSpecName "kube-api-access-lx7pn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.925576 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f7c8faf-9df9-40e0-83c7-8fb987985673" (UID: "3f7c8faf-9df9-40e0-83c7-8fb987985673"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.975540 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx7pn\" (UniqueName: \"kubernetes.io/projected/3f7c8faf-9df9-40e0-83c7-8fb987985673-kube-api-access-lx7pn\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.975585 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.975600 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.232988 4942 generic.go:334] "Generic (PLEG): container finished" podID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerID="b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631" exitCode=0 Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.233081 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgwd" event={"ID":"3f7c8faf-9df9-40e0-83c7-8fb987985673","Type":"ContainerDied","Data":"b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631"} Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.233197 4942 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-jfgwd" event={"ID":"3f7c8faf-9df9-40e0-83c7-8fb987985673","Type":"ContainerDied","Data":"d3befc2b3f5841889d2c50989a675ede20bb79cfd1a022cebe42c3897bfc202a"} Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.233270 4942 scope.go:117] "RemoveContainer" containerID="b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.233119 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.285017 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jfgwd"] Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.300612 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jfgwd"] Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.302443 4942 scope.go:117] "RemoveContainer" containerID="ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.337035 4942 scope.go:117] "RemoveContainer" containerID="8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.402663 4942 scope.go:117] "RemoveContainer" containerID="b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631" Feb 18 20:04:40 crc kubenswrapper[4942]: E0218 20:04:40.403185 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631\": container with ID starting with b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631 not found: ID does not exist" containerID="b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 
20:04:40.403218 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631"} err="failed to get container status \"b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631\": rpc error: code = NotFound desc = could not find container \"b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631\": container with ID starting with b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631 not found: ID does not exist" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.403240 4942 scope.go:117] "RemoveContainer" containerID="ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7" Feb 18 20:04:40 crc kubenswrapper[4942]: E0218 20:04:40.403551 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7\": container with ID starting with ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7 not found: ID does not exist" containerID="ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.403598 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7"} err="failed to get container status \"ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7\": rpc error: code = NotFound desc = could not find container \"ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7\": container with ID starting with ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7 not found: ID does not exist" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.403636 4942 scope.go:117] "RemoveContainer" containerID="8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e" Feb 18 20:04:40 crc 
kubenswrapper[4942]: E0218 20:04:40.403960 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e\": container with ID starting with 8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e not found: ID does not exist" containerID="8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.403995 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e"} err="failed to get container status \"8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e\": rpc error: code = NotFound desc = could not find container \"8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e\": container with ID starting with 8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e not found: ID does not exist" Feb 18 20:04:41 crc kubenswrapper[4942]: I0218 20:04:41.055541 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" path="/var/lib/kubelet/pods/3f7c8faf-9df9-40e0-83c7-8fb987985673/volumes" Feb 18 20:04:53 crc kubenswrapper[4942]: I0218 20:04:53.740434 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:04:53 crc kubenswrapper[4942]: I0218 20:04:53.741073 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 18 20:05:23 crc kubenswrapper[4942]: I0218 20:05:23.741489 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:05:23 crc kubenswrapper[4942]: I0218 20:05:23.742076 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:05:53 crc kubenswrapper[4942]: I0218 20:05:53.741248 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:05:53 crc kubenswrapper[4942]: I0218 20:05:53.741935 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:05:53 crc kubenswrapper[4942]: I0218 20:05:53.741998 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 20:05:53 crc kubenswrapper[4942]: I0218 20:05:53.742870 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"339398ef2c817c25ee087d2b884ff3bce0c2b59c4bf8c232769e062241809fa2"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:05:53 crc kubenswrapper[4942]: I0218 20:05:53.742937 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://339398ef2c817c25ee087d2b884ff3bce0c2b59c4bf8c232769e062241809fa2" gracePeriod=600 Feb 18 20:05:53 crc kubenswrapper[4942]: I0218 20:05:53.994047 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="339398ef2c817c25ee087d2b884ff3bce0c2b59c4bf8c232769e062241809fa2" exitCode=0 Feb 18 20:05:53 crc kubenswrapper[4942]: I0218 20:05:53.994136 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"339398ef2c817c25ee087d2b884ff3bce0c2b59c4bf8c232769e062241809fa2"} Feb 18 20:05:53 crc kubenswrapper[4942]: I0218 20:05:53.994448 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:05:55 crc kubenswrapper[4942]: I0218 20:05:55.005626 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495"} Feb 18 20:07:31 crc kubenswrapper[4942]: I0218 20:07:31.772298 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c330a0f3-afd7-4b55-8d33-8617b38bba91" 
containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Feb 18 20:07:31 crc kubenswrapper[4942]: I0218 20:07:31.772299 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 18 20:07:31 crc kubenswrapper[4942]: I0218 20:07:31.994971 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="e7ce79f4-8fac-499d-aa4d-1ca6b2b50259" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.186:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 20:07:36 crc kubenswrapper[4942]: I0218 20:07:36.773133 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 18 20:07:37 crc kubenswrapper[4942]: I0218 20:07:37.039009 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="e7ce79f4-8fac-499d-aa4d-1ca6b2b50259" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.186:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 20:07:41 crc kubenswrapper[4942]: I0218 20:07:41.772046 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 18 20:07:41 crc kubenswrapper[4942]: I0218 20:07:41.772653 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Feb 18 20:07:41 crc kubenswrapper[4942]: I0218 20:07:41.773726 4942 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"724cd265bca66d36c5206546352c1744fd4175372a93790f844a697f57c62cf3"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Feb 18 20:07:41 crc kubenswrapper[4942]: I0218 20:07:41.773824 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerName="ceilometer-central-agent" containerID="cri-o://724cd265bca66d36c5206546352c1744fd4175372a93790f844a697f57c62cf3" gracePeriod=30 Feb 18 20:07:42 crc kubenswrapper[4942]: I0218 20:07:42.083974 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="e7ce79f4-8fac-499d-aa4d-1ca6b2b50259" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.186:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 20:07:42 crc kubenswrapper[4942]: I0218 20:07:42.084064 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 20:07:42 crc kubenswrapper[4942]: I0218 20:07:42.085357 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"a4316a50ea1a16243db84d37fb517e94ea394f23b89e3660f9729bb3224e6560"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Feb 18 20:07:42 crc kubenswrapper[4942]: I0218 20:07:42.085434 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e7ce79f4-8fac-499d-aa4d-1ca6b2b50259" containerName="cinder-scheduler" containerID="cri-o://a4316a50ea1a16243db84d37fb517e94ea394f23b89e3660f9729bb3224e6560" gracePeriod=30 Feb 18 20:08:01 crc kubenswrapper[4942]: I0218 20:08:01.769035 4942 
prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Feb 18 20:08:20 crc kubenswrapper[4942]: I0218 20:08:20.223365 4942 generic.go:334] "Generic (PLEG): container finished" podID="e7ce79f4-8fac-499d-aa4d-1ca6b2b50259" containerID="a4316a50ea1a16243db84d37fb517e94ea394f23b89e3660f9729bb3224e6560" exitCode=-1 Feb 18 20:08:20 crc kubenswrapper[4942]: I0218 20:08:20.223476 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259","Type":"ContainerDied","Data":"a4316a50ea1a16243db84d37fb517e94ea394f23b89e3660f9729bb3224e6560"} Feb 18 20:08:23 crc kubenswrapper[4942]: I0218 20:08:23.741327 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:08:23 crc kubenswrapper[4942]: I0218 20:08:23.742116 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:08:24 crc kubenswrapper[4942]: I0218 20:08:24.891483 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" podUID="3b42f10c-a162-4d74-9eed-b6c3ef08cdb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": dial tcp 10.217.0.87:8081: connect: connection refused" Feb 18 20:08:24 crc kubenswrapper[4942]: I0218 20:08:24.891577 4942 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" podUID="3b42f10c-a162-4d74-9eed-b6c3ef08cdb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": dial tcp 10.217.0.87:8081: connect: connection refused" Feb 18 20:08:26 crc kubenswrapper[4942]: I0218 20:08:26.737731 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerName="ceilometer-notification-agent" probeResult="failure" output=< Feb 18 20:08:26 crc kubenswrapper[4942]: Unkown error: Expecting value: line 1 column 1 (char 0) Feb 18 20:08:26 crc kubenswrapper[4942]: > Feb 18 20:08:26 crc kubenswrapper[4942]: I0218 20:08:26.738125 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Feb 18 20:08:28 crc kubenswrapper[4942]: I0218 20:08:28.312471 4942 generic.go:334] "Generic (PLEG): container finished" podID="3b42f10c-a162-4d74-9eed-b6c3ef08cdb7" containerID="b0e5cc17d5708a2bf67f2c62fdedb963fde1c3e9e426935ccb4895be0efefc73" exitCode=1 Feb 18 20:08:28 crc kubenswrapper[4942]: I0218 20:08:28.314151 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" event={"ID":"3b42f10c-a162-4d74-9eed-b6c3ef08cdb7","Type":"ContainerDied","Data":"b0e5cc17d5708a2bf67f2c62fdedb963fde1c3e9e426935ccb4895be0efefc73"} Feb 18 20:08:28 crc kubenswrapper[4942]: I0218 20:08:28.314997 4942 scope.go:117] "RemoveContainer" containerID="b0e5cc17d5708a2bf67f2c62fdedb963fde1c3e9e426935ccb4895be0efefc73" Feb 18 20:08:29 crc kubenswrapper[4942]: I0218 20:08:29.325134 4942 generic.go:334] "Generic (PLEG): container finished" podID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerID="724cd265bca66d36c5206546352c1744fd4175372a93790f844a697f57c62cf3" exitCode=137 Feb 18 20:08:29 crc kubenswrapper[4942]: I0218 
20:08:29.325220 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerDied","Data":"724cd265bca66d36c5206546352c1744fd4175372a93790f844a697f57c62cf3"} Feb 18 20:08:29 crc kubenswrapper[4942]: I0218 20:08:29.433482 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:08:30 crc kubenswrapper[4942]: I0218 20:08:30.336710 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerStarted","Data":"9fe9e0aa37767ce5de90121ee990ec21b503a4213a1d4d290cce06cc587867b8"} Feb 18 20:08:30 crc kubenswrapper[4942]: I0218 20:08:30.337697 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-notification-agent" containerStatusID={"Type":"cri-o","ID":"532c795a258873ae20237a974d4194a954b9ccd2130576ed8beb675e6befbd60"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-notification-agent failed liveness probe, will be restarted" Feb 18 20:08:30 crc kubenswrapper[4942]: I0218 20:08:30.337799 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerName="ceilometer-notification-agent" containerID="cri-o://532c795a258873ae20237a974d4194a954b9ccd2130576ed8beb675e6befbd60" gracePeriod=30 Feb 18 20:08:30 crc kubenswrapper[4942]: I0218 20:08:30.339643 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259","Type":"ContainerStarted","Data":"2dda9acf7c5f07d65a720caa052bdd40927e1bef6f72f788b3ad1623f5768a13"} Feb 18 20:08:30 crc kubenswrapper[4942]: I0218 20:08:30.342473 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" 
event={"ID":"3b42f10c-a162-4d74-9eed-b6c3ef08cdb7","Type":"ContainerStarted","Data":"9429fdb2b3dba638af08226edd5a7591b18b05408131768a1f55e256068987cc"} Feb 18 20:08:30 crc kubenswrapper[4942]: I0218 20:08:30.342733 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" Feb 18 20:08:30 crc kubenswrapper[4942]: I0218 20:08:30.954433 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 20:08:33 crc kubenswrapper[4942]: I0218 20:08:33.397151 4942 generic.go:334] "Generic (PLEG): container finished" podID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerID="532c795a258873ae20237a974d4194a954b9ccd2130576ed8beb675e6befbd60" exitCode=0 Feb 18 20:08:33 crc kubenswrapper[4942]: I0218 20:08:33.398580 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerDied","Data":"532c795a258873ae20237a974d4194a954b9ccd2130576ed8beb675e6befbd60"} Feb 18 20:08:33 crc kubenswrapper[4942]: I0218 20:08:33.398648 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerStarted","Data":"e5443cb0e3f421192e8c5cff2ac8b62d842802120d6ff0cd27e163c42866d441"} Feb 18 20:08:34 crc kubenswrapper[4942]: I0218 20:08:34.891818 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" Feb 18 20:08:35 crc kubenswrapper[4942]: I0218 20:08:35.967735 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 20:08:53 crc kubenswrapper[4942]: I0218 20:08:53.740449 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:08:53 crc kubenswrapper[4942]: I0218 20:08:53.740963 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.741175 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.744167 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.744513 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.746054 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.746427 4942 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" gracePeriod=600 Feb 18 20:09:23 crc kubenswrapper[4942]: E0218 20:09:23.873157 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.888411 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" exitCode=0 Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.888506 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495"} Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.888847 4942 scope.go:117] "RemoveContainer" containerID="339398ef2c817c25ee087d2b884ff3bce0c2b59c4bf8c232769e062241809fa2" Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.889838 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:09:23 crc kubenswrapper[4942]: E0218 20:09:23.890231 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:09:37 crc kubenswrapper[4942]: I0218 20:09:37.039012 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:09:37 crc kubenswrapper[4942]: E0218 20:09:37.040432 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.728139 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-54nj4"] Feb 18 20:09:46 crc kubenswrapper[4942]: E0218 20:09:46.729021 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerName="extract-utilities" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.729033 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerName="extract-utilities" Feb 18 20:09:46 crc kubenswrapper[4942]: E0218 20:09:46.729046 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerName="extract-content" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.729052 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerName="extract-content" Feb 18 20:09:46 crc kubenswrapper[4942]: E0218 20:09:46.729084 4942 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerName="registry-server" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.729091 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerName="registry-server" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.729257 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerName="registry-server" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.730576 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.760295 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-54nj4"] Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.795584 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wnh4\" (UniqueName: \"kubernetes.io/projected/8fb258f6-7f5f-4390-914c-c995678e50a1-kube-api-access-6wnh4\") pod \"community-operators-54nj4\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.795917 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-utilities\") pod \"community-operators-54nj4\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.795988 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-catalog-content\") pod 
\"community-operators-54nj4\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.898813 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wnh4\" (UniqueName: \"kubernetes.io/projected/8fb258f6-7f5f-4390-914c-c995678e50a1-kube-api-access-6wnh4\") pod \"community-operators-54nj4\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.899200 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-utilities\") pod \"community-operators-54nj4\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.899320 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-catalog-content\") pod \"community-operators-54nj4\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.899709 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-utilities\") pod \"community-operators-54nj4\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.899800 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-catalog-content\") pod \"community-operators-54nj4\" (UID: 
\"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.946115 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wnh4\" (UniqueName: \"kubernetes.io/projected/8fb258f6-7f5f-4390-914c-c995678e50a1-kube-api-access-6wnh4\") pod \"community-operators-54nj4\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:47 crc kubenswrapper[4942]: I0218 20:09:47.057323 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:47 crc kubenswrapper[4942]: I0218 20:09:47.589611 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-54nj4"] Feb 18 20:09:48 crc kubenswrapper[4942]: I0218 20:09:48.194947 4942 generic.go:334] "Generic (PLEG): container finished" podID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerID="461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705" exitCode=0 Feb 18 20:09:48 crc kubenswrapper[4942]: I0218 20:09:48.195269 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54nj4" event={"ID":"8fb258f6-7f5f-4390-914c-c995678e50a1","Type":"ContainerDied","Data":"461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705"} Feb 18 20:09:48 crc kubenswrapper[4942]: I0218 20:09:48.195666 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54nj4" event={"ID":"8fb258f6-7f5f-4390-914c-c995678e50a1","Type":"ContainerStarted","Data":"b40cd8f81d01c74449c4f0f506f6c37d26862a0d01d0a884cb9a3cbe588a5959"} Feb 18 20:09:49 crc kubenswrapper[4942]: I0218 20:09:49.036411 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:09:49 crc kubenswrapper[4942]: E0218 
20:09:49.037342 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:09:49 crc kubenswrapper[4942]: I0218 20:09:49.207957 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54nj4" event={"ID":"8fb258f6-7f5f-4390-914c-c995678e50a1","Type":"ContainerStarted","Data":"8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2"} Feb 18 20:09:51 crc kubenswrapper[4942]: I0218 20:09:51.230373 4942 generic.go:334] "Generic (PLEG): container finished" podID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerID="8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2" exitCode=0 Feb 18 20:09:51 crc kubenswrapper[4942]: I0218 20:09:51.230634 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54nj4" event={"ID":"8fb258f6-7f5f-4390-914c-c995678e50a1","Type":"ContainerDied","Data":"8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2"} Feb 18 20:09:52 crc kubenswrapper[4942]: I0218 20:09:52.244584 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54nj4" event={"ID":"8fb258f6-7f5f-4390-914c-c995678e50a1","Type":"ContainerStarted","Data":"8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba"} Feb 18 20:09:52 crc kubenswrapper[4942]: I0218 20:09:52.292245 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-54nj4" podStartSLOduration=2.784529353 podStartE2EDuration="6.292220387s" podCreationTimestamp="2026-02-18 20:09:46 +0000 UTC" 
firstStartedPulling="2026-02-18 20:09:48.199998568 +0000 UTC m=+3147.904931233" lastFinishedPulling="2026-02-18 20:09:51.707689562 +0000 UTC m=+3151.412622267" observedRunningTime="2026-02-18 20:09:52.268999823 +0000 UTC m=+3151.973932528" watchObservedRunningTime="2026-02-18 20:09:52.292220387 +0000 UTC m=+3151.997153062" Feb 18 20:09:57 crc kubenswrapper[4942]: I0218 20:09:57.058078 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:57 crc kubenswrapper[4942]: I0218 20:09:57.058799 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:57 crc kubenswrapper[4942]: I0218 20:09:57.133066 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:57 crc kubenswrapper[4942]: I0218 20:09:57.381969 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:57 crc kubenswrapper[4942]: I0218 20:09:57.438587 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-54nj4"] Feb 18 20:09:59 crc kubenswrapper[4942]: I0218 20:09:59.336503 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-54nj4" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerName="registry-server" containerID="cri-o://8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba" gracePeriod=2 Feb 18 20:09:59 crc kubenswrapper[4942]: I0218 20:09:59.976415 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.037090 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:10:00 crc kubenswrapper[4942]: E0218 20:10:00.037404 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.088853 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wnh4\" (UniqueName: \"kubernetes.io/projected/8fb258f6-7f5f-4390-914c-c995678e50a1-kube-api-access-6wnh4\") pod \"8fb258f6-7f5f-4390-914c-c995678e50a1\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.088920 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-catalog-content\") pod \"8fb258f6-7f5f-4390-914c-c995678e50a1\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.088944 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-utilities\") pod \"8fb258f6-7f5f-4390-914c-c995678e50a1\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.089930 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-utilities" (OuterVolumeSpecName: "utilities") pod "8fb258f6-7f5f-4390-914c-c995678e50a1" (UID: "8fb258f6-7f5f-4390-914c-c995678e50a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.097223 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb258f6-7f5f-4390-914c-c995678e50a1-kube-api-access-6wnh4" (OuterVolumeSpecName: "kube-api-access-6wnh4") pod "8fb258f6-7f5f-4390-914c-c995678e50a1" (UID: "8fb258f6-7f5f-4390-914c-c995678e50a1"). InnerVolumeSpecName "kube-api-access-6wnh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.149232 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fb258f6-7f5f-4390-914c-c995678e50a1" (UID: "8fb258f6-7f5f-4390-914c-c995678e50a1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.193438 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wnh4\" (UniqueName: \"kubernetes.io/projected/8fb258f6-7f5f-4390-914c-c995678e50a1-kube-api-access-6wnh4\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.193479 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.193490 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.347885 4942 generic.go:334] "Generic (PLEG): container finished" podID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerID="8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba" exitCode=0 Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.347945 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.347952 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54nj4" event={"ID":"8fb258f6-7f5f-4390-914c-c995678e50a1","Type":"ContainerDied","Data":"8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba"} Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.348111 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54nj4" event={"ID":"8fb258f6-7f5f-4390-914c-c995678e50a1","Type":"ContainerDied","Data":"b40cd8f81d01c74449c4f0f506f6c37d26862a0d01d0a884cb9a3cbe588a5959"} Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.348160 4942 scope.go:117] "RemoveContainer" containerID="8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.382744 4942 scope.go:117] "RemoveContainer" containerID="8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.403152 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-54nj4"] Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.410346 4942 scope.go:117] "RemoveContainer" containerID="461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.415899 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-54nj4"] Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.474362 4942 scope.go:117] "RemoveContainer" containerID="8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba" Feb 18 20:10:00 crc kubenswrapper[4942]: E0218 20:10:00.474843 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba\": container with ID starting with 8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba not found: ID does not exist" containerID="8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.474892 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba"} err="failed to get container status \"8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba\": rpc error: code = NotFound desc = could not find container \"8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba\": container with ID starting with 8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba not found: ID does not exist" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.474920 4942 scope.go:117] "RemoveContainer" containerID="8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2" Feb 18 20:10:00 crc kubenswrapper[4942]: E0218 20:10:00.475486 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2\": container with ID starting with 8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2 not found: ID does not exist" containerID="8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.475508 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2"} err="failed to get container status \"8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2\": rpc error: code = NotFound desc = could not find container \"8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2\": container with ID 
starting with 8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2 not found: ID does not exist" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.475521 4942 scope.go:117] "RemoveContainer" containerID="461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705" Feb 18 20:10:00 crc kubenswrapper[4942]: E0218 20:10:00.475751 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705\": container with ID starting with 461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705 not found: ID does not exist" containerID="461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.475790 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705"} err="failed to get container status \"461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705\": rpc error: code = NotFound desc = could not find container \"461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705\": container with ID starting with 461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705 not found: ID does not exist" Feb 18 20:10:01 crc kubenswrapper[4942]: I0218 20:10:01.069966 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" path="/var/lib/kubelet/pods/8fb258f6-7f5f-4390-914c-c995678e50a1/volumes" Feb 18 20:10:13 crc kubenswrapper[4942]: I0218 20:10:13.037071 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:10:13 crc kubenswrapper[4942]: E0218 20:10:13.038343 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:10:28 crc kubenswrapper[4942]: I0218 20:10:28.036363 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:10:28 crc kubenswrapper[4942]: E0218 20:10:28.037155 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:10:39 crc kubenswrapper[4942]: I0218 20:10:39.039155 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:10:39 crc kubenswrapper[4942]: E0218 20:10:39.040049 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:10:53 crc kubenswrapper[4942]: I0218 20:10:53.038518 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:10:53 crc kubenswrapper[4942]: E0218 20:10:53.039470 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:11:06 crc kubenswrapper[4942]: I0218 20:11:06.037490 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:11:06 crc kubenswrapper[4942]: E0218 20:11:06.038195 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:11:20 crc kubenswrapper[4942]: I0218 20:11:20.036516 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:11:20 crc kubenswrapper[4942]: E0218 20:11:20.037547 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:11:32 crc kubenswrapper[4942]: I0218 20:11:32.036953 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:11:32 crc kubenswrapper[4942]: E0218 20:11:32.037831 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:11:43 crc kubenswrapper[4942]: I0218 20:11:43.037755 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:11:43 crc kubenswrapper[4942]: E0218 20:11:43.039352 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:11:55 crc kubenswrapper[4942]: I0218 20:11:55.036846 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:11:55 crc kubenswrapper[4942]: E0218 20:11:55.037554 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:12:06 crc kubenswrapper[4942]: I0218 20:12:06.036412 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:12:06 crc kubenswrapper[4942]: E0218 20:12:06.037160 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:12:18 crc kubenswrapper[4942]: I0218 20:12:18.037071 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:12:18 crc kubenswrapper[4942]: E0218 20:12:18.040276 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:12:33 crc kubenswrapper[4942]: I0218 20:12:33.036329 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:12:33 crc kubenswrapper[4942]: E0218 20:12:33.037104 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:12:48 crc kubenswrapper[4942]: I0218 20:12:48.035985 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:12:48 crc kubenswrapper[4942]: E0218 20:12:48.036891 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:13:00 crc kubenswrapper[4942]: I0218 20:13:00.036462 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:13:00 crc kubenswrapper[4942]: E0218 20:13:00.037637 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:13:13 crc kubenswrapper[4942]: I0218 20:13:13.036150 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:13:13 crc kubenswrapper[4942]: E0218 20:13:13.036748 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:13:27 crc kubenswrapper[4942]: I0218 20:13:27.052734 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:13:27 crc kubenswrapper[4942]: E0218 20:13:27.055441 4942 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:13:39 crc kubenswrapper[4942]: I0218 20:13:39.036500 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:13:39 crc kubenswrapper[4942]: E0218 20:13:39.037421 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.822287 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nfjgd"] Feb 18 20:13:53 crc kubenswrapper[4942]: E0218 20:13:53.823323 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerName="extract-utilities" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.823338 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerName="extract-utilities" Feb 18 20:13:53 crc kubenswrapper[4942]: E0218 20:13:53.823372 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerName="registry-server" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.823379 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" 
containerName="registry-server" Feb 18 20:13:53 crc kubenswrapper[4942]: E0218 20:13:53.823401 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerName="extract-content" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.823409 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerName="extract-content" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.823654 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerName="registry-server" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.825358 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.834563 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nfjgd"] Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.888022 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwbms\" (UniqueName: \"kubernetes.io/projected/5d21c90f-12fc-4f90-a74e-8da0266710d6-kube-api-access-pwbms\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.888123 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-utilities\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.888183 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-catalog-content\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.989365 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwbms\" (UniqueName: \"kubernetes.io/projected/5d21c90f-12fc-4f90-a74e-8da0266710d6-kube-api-access-pwbms\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.989838 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-utilities\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.990031 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-catalog-content\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.990211 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-utilities\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.990530 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-catalog-content\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:54 crc kubenswrapper[4942]: I0218 20:13:54.017669 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwbms\" (UniqueName: \"kubernetes.io/projected/5d21c90f-12fc-4f90-a74e-8da0266710d6-kube-api-access-pwbms\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:54 crc kubenswrapper[4942]: I0218 20:13:54.035397 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:13:54 crc kubenswrapper[4942]: E0218 20:13:54.035937 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:13:54 crc kubenswrapper[4942]: I0218 20:13:54.150246 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:55 crc kubenswrapper[4942]: I0218 20:13:55.266339 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nfjgd"] Feb 18 20:13:55 crc kubenswrapper[4942]: I0218 20:13:55.938110 4942 generic.go:334] "Generic (PLEG): container finished" podID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerID="4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f" exitCode=0 Feb 18 20:13:55 crc kubenswrapper[4942]: I0218 20:13:55.938179 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfjgd" event={"ID":"5d21c90f-12fc-4f90-a74e-8da0266710d6","Type":"ContainerDied","Data":"4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f"} Feb 18 20:13:55 crc kubenswrapper[4942]: I0218 20:13:55.938211 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfjgd" event={"ID":"5d21c90f-12fc-4f90-a74e-8da0266710d6","Type":"ContainerStarted","Data":"c54b96648e25109d528d6659096fcf53de95dbbd628245057ff76cb7139280b0"} Feb 18 20:13:55 crc kubenswrapper[4942]: I0218 20:13:55.939786 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:13:57 crc kubenswrapper[4942]: I0218 20:13:57.959698 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfjgd" event={"ID":"5d21c90f-12fc-4f90-a74e-8da0266710d6","Type":"ContainerStarted","Data":"cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee"} Feb 18 20:14:01 crc kubenswrapper[4942]: I0218 20:14:01.996515 4942 generic.go:334] "Generic (PLEG): container finished" podID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerID="cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee" exitCode=0 Feb 18 20:14:01 crc kubenswrapper[4942]: I0218 20:14:01.996583 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-nfjgd" event={"ID":"5d21c90f-12fc-4f90-a74e-8da0266710d6","Type":"ContainerDied","Data":"cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee"} Feb 18 20:14:03 crc kubenswrapper[4942]: I0218 20:14:03.008075 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfjgd" event={"ID":"5d21c90f-12fc-4f90-a74e-8da0266710d6","Type":"ContainerStarted","Data":"7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526"} Feb 18 20:14:03 crc kubenswrapper[4942]: I0218 20:14:03.029796 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nfjgd" podStartSLOduration=3.530061087 podStartE2EDuration="10.02977845s" podCreationTimestamp="2026-02-18 20:13:53 +0000 UTC" firstStartedPulling="2026-02-18 20:13:55.939586574 +0000 UTC m=+3395.644519239" lastFinishedPulling="2026-02-18 20:14:02.439303927 +0000 UTC m=+3402.144236602" observedRunningTime="2026-02-18 20:14:03.025702932 +0000 UTC m=+3402.730635607" watchObservedRunningTime="2026-02-18 20:14:03.02977845 +0000 UTC m=+3402.734711105" Feb 18 20:14:04 crc kubenswrapper[4942]: I0218 20:14:04.151333 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:14:04 crc kubenswrapper[4942]: I0218 20:14:04.152635 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:14:05 crc kubenswrapper[4942]: I0218 20:14:05.218086 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nfjgd" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="registry-server" probeResult="failure" output=< Feb 18 20:14:05 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 20:14:05 crc kubenswrapper[4942]: > Feb 18 20:14:08 crc kubenswrapper[4942]: I0218 
20:14:08.036255 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:14:08 crc kubenswrapper[4942]: E0218 20:14:08.036805 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.318107 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4sdkv"] Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.320913 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.340354 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sdkv"] Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.481565 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-utilities\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.481822 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-catalog-content\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc 
kubenswrapper[4942]: I0218 20:14:13.481926 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czrkg\" (UniqueName: \"kubernetes.io/projected/faf7f8c4-fc64-4175-91fc-88159872b42c-kube-api-access-czrkg\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.583911 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-utilities\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.584237 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-catalog-content\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.584284 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czrkg\" (UniqueName: \"kubernetes.io/projected/faf7f8c4-fc64-4175-91fc-88159872b42c-kube-api-access-czrkg\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.584546 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-utilities\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc 
kubenswrapper[4942]: I0218 20:14:13.584644 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-catalog-content\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.604036 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czrkg\" (UniqueName: \"kubernetes.io/projected/faf7f8c4-fc64-4175-91fc-88159872b42c-kube-api-access-czrkg\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.646025 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:14 crc kubenswrapper[4942]: W0218 20:14:14.151126 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaf7f8c4_fc64_4175_91fc_88159872b42c.slice/crio-64999d3a1747b900f09b3bd8bcd593a06bdda3e21320990765cd8f52df1e7ef4 WatchSource:0}: Error finding container 64999d3a1747b900f09b3bd8bcd593a06bdda3e21320990765cd8f52df1e7ef4: Status 404 returned error can't find the container with id 64999d3a1747b900f09b3bd8bcd593a06bdda3e21320990765cd8f52df1e7ef4 Feb 18 20:14:14 crc kubenswrapper[4942]: I0218 20:14:14.192996 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sdkv"] Feb 18 20:14:15 crc kubenswrapper[4942]: I0218 20:14:15.130597 4942 generic.go:334] "Generic (PLEG): container finished" podID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerID="bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a" exitCode=0 Feb 18 20:14:15 crc kubenswrapper[4942]: I0218 20:14:15.130707 4942 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdkv" event={"ID":"faf7f8c4-fc64-4175-91fc-88159872b42c","Type":"ContainerDied","Data":"bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a"} Feb 18 20:14:15 crc kubenswrapper[4942]: I0218 20:14:15.131713 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdkv" event={"ID":"faf7f8c4-fc64-4175-91fc-88159872b42c","Type":"ContainerStarted","Data":"64999d3a1747b900f09b3bd8bcd593a06bdda3e21320990765cd8f52df1e7ef4"} Feb 18 20:14:15 crc kubenswrapper[4942]: I0218 20:14:15.212720 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nfjgd" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="registry-server" probeResult="failure" output=< Feb 18 20:14:15 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 20:14:15 crc kubenswrapper[4942]: > Feb 18 20:14:16 crc kubenswrapper[4942]: I0218 20:14:16.141286 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdkv" event={"ID":"faf7f8c4-fc64-4175-91fc-88159872b42c","Type":"ContainerStarted","Data":"c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0"} Feb 18 20:14:18 crc kubenswrapper[4942]: I0218 20:14:18.166295 4942 generic.go:334] "Generic (PLEG): container finished" podID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerID="c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0" exitCode=0 Feb 18 20:14:18 crc kubenswrapper[4942]: I0218 20:14:18.166378 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdkv" event={"ID":"faf7f8c4-fc64-4175-91fc-88159872b42c","Type":"ContainerDied","Data":"c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0"} Feb 18 20:14:19 crc kubenswrapper[4942]: I0218 20:14:19.036032 4942 scope.go:117] "RemoveContainer" 
containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:14:19 crc kubenswrapper[4942]: E0218 20:14:19.036554 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:14:19 crc kubenswrapper[4942]: I0218 20:14:19.178629 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdkv" event={"ID":"faf7f8c4-fc64-4175-91fc-88159872b42c","Type":"ContainerStarted","Data":"1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253"} Feb 18 20:14:19 crc kubenswrapper[4942]: I0218 20:14:19.205588 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4sdkv" podStartSLOduration=2.770162916 podStartE2EDuration="6.205568668s" podCreationTimestamp="2026-02-18 20:14:13 +0000 UTC" firstStartedPulling="2026-02-18 20:14:15.132873256 +0000 UTC m=+3414.837805921" lastFinishedPulling="2026-02-18 20:14:18.568279008 +0000 UTC m=+3418.273211673" observedRunningTime="2026-02-18 20:14:19.203339019 +0000 UTC m=+3418.908271694" watchObservedRunningTime="2026-02-18 20:14:19.205568668 +0000 UTC m=+3418.910501343" Feb 18 20:14:23 crc kubenswrapper[4942]: I0218 20:14:23.647206 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:23 crc kubenswrapper[4942]: I0218 20:14:23.649001 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:24 crc kubenswrapper[4942]: I0218 20:14:24.706404 4942 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4sdkv" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="registry-server" probeResult="failure" output=< Feb 18 20:14:24 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 20:14:24 crc kubenswrapper[4942]: > Feb 18 20:14:25 crc kubenswrapper[4942]: I0218 20:14:25.212092 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nfjgd" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="registry-server" probeResult="failure" output=< Feb 18 20:14:25 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 20:14:25 crc kubenswrapper[4942]: > Feb 18 20:14:32 crc kubenswrapper[4942]: I0218 20:14:32.036706 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:14:32 crc kubenswrapper[4942]: I0218 20:14:32.319522 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"4637aee37878bbe70bd62244a6764ec2f38f73d2c09b6cb8754f4ec3ccb78f19"} Feb 18 20:14:33 crc kubenswrapper[4942]: I0218 20:14:33.706286 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:33 crc kubenswrapper[4942]: I0218 20:14:33.767595 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:33 crc kubenswrapper[4942]: I0218 20:14:33.944743 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sdkv"] Feb 18 20:14:34 crc kubenswrapper[4942]: I0218 20:14:34.206552 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:14:34 crc kubenswrapper[4942]: I0218 20:14:34.269874 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:14:35 crc kubenswrapper[4942]: I0218 20:14:35.350939 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4sdkv" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="registry-server" containerID="cri-o://1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253" gracePeriod=2 Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.344564 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nfjgd"] Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.345104 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nfjgd" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="registry-server" containerID="cri-o://7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526" gracePeriod=2 Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.345547 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.360478 4942 generic.go:334] "Generic (PLEG): container finished" podID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerID="1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253" exitCode=0 Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.360521 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdkv" event={"ID":"faf7f8c4-fc64-4175-91fc-88159872b42c","Type":"ContainerDied","Data":"1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253"} Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.360547 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdkv" event={"ID":"faf7f8c4-fc64-4175-91fc-88159872b42c","Type":"ContainerDied","Data":"64999d3a1747b900f09b3bd8bcd593a06bdda3e21320990765cd8f52df1e7ef4"} Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.360556 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.360576 4942 scope.go:117] "RemoveContainer" containerID="1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.386602 4942 scope.go:117] "RemoveContainer" containerID="c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.389436 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-utilities\") pod \"faf7f8c4-fc64-4175-91fc-88159872b42c\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.389510 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czrkg\" (UniqueName: \"kubernetes.io/projected/faf7f8c4-fc64-4175-91fc-88159872b42c-kube-api-access-czrkg\") pod \"faf7f8c4-fc64-4175-91fc-88159872b42c\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.389568 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-catalog-content\") pod \"faf7f8c4-fc64-4175-91fc-88159872b42c\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.390683 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-utilities" (OuterVolumeSpecName: "utilities") pod "faf7f8c4-fc64-4175-91fc-88159872b42c" (UID: "faf7f8c4-fc64-4175-91fc-88159872b42c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.415325 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "faf7f8c4-fc64-4175-91fc-88159872b42c" (UID: "faf7f8c4-fc64-4175-91fc-88159872b42c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.426955 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf7f8c4-fc64-4175-91fc-88159872b42c-kube-api-access-czrkg" (OuterVolumeSpecName: "kube-api-access-czrkg") pod "faf7f8c4-fc64-4175-91fc-88159872b42c" (UID: "faf7f8c4-fc64-4175-91fc-88159872b42c"). InnerVolumeSpecName "kube-api-access-czrkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.438401 4942 scope.go:117] "RemoveContainer" containerID="bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.492042 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.492087 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czrkg\" (UniqueName: \"kubernetes.io/projected/faf7f8c4-fc64-4175-91fc-88159872b42c-kube-api-access-czrkg\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.492099 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 
20:14:36.620401 4942 scope.go:117] "RemoveContainer" containerID="1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253" Feb 18 20:14:36 crc kubenswrapper[4942]: E0218 20:14:36.622837 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253\": container with ID starting with 1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253 not found: ID does not exist" containerID="1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.622877 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253"} err="failed to get container status \"1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253\": rpc error: code = NotFound desc = could not find container \"1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253\": container with ID starting with 1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253 not found: ID does not exist" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.622903 4942 scope.go:117] "RemoveContainer" containerID="c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0" Feb 18 20:14:36 crc kubenswrapper[4942]: E0218 20:14:36.627000 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0\": container with ID starting with c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0 not found: ID does not exist" containerID="c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.627046 4942 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0"} err="failed to get container status \"c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0\": rpc error: code = NotFound desc = could not find container \"c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0\": container with ID starting with c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0 not found: ID does not exist" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.627075 4942 scope.go:117] "RemoveContainer" containerID="bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a" Feb 18 20:14:36 crc kubenswrapper[4942]: E0218 20:14:36.627359 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a\": container with ID starting with bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a not found: ID does not exist" containerID="bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.627380 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a"} err="failed to get container status \"bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a\": rpc error: code = NotFound desc = could not find container \"bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a\": container with ID starting with bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a not found: ID does not exist" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.698591 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sdkv"] Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.709028 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-4sdkv"] Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.808340 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.898941 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwbms\" (UniqueName: \"kubernetes.io/projected/5d21c90f-12fc-4f90-a74e-8da0266710d6-kube-api-access-pwbms\") pod \"5d21c90f-12fc-4f90-a74e-8da0266710d6\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.899025 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-catalog-content\") pod \"5d21c90f-12fc-4f90-a74e-8da0266710d6\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.899326 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-utilities\") pod \"5d21c90f-12fc-4f90-a74e-8da0266710d6\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.900030 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-utilities" (OuterVolumeSpecName: "utilities") pod "5d21c90f-12fc-4f90-a74e-8da0266710d6" (UID: "5d21c90f-12fc-4f90-a74e-8da0266710d6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.904159 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d21c90f-12fc-4f90-a74e-8da0266710d6-kube-api-access-pwbms" (OuterVolumeSpecName: "kube-api-access-pwbms") pod "5d21c90f-12fc-4f90-a74e-8da0266710d6" (UID: "5d21c90f-12fc-4f90-a74e-8da0266710d6"). InnerVolumeSpecName "kube-api-access-pwbms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.002065 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwbms\" (UniqueName: \"kubernetes.io/projected/5d21c90f-12fc-4f90-a74e-8da0266710d6-kube-api-access-pwbms\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.002460 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.019326 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d21c90f-12fc-4f90-a74e-8da0266710d6" (UID: "5d21c90f-12fc-4f90-a74e-8da0266710d6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.049309 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" path="/var/lib/kubelet/pods/faf7f8c4-fc64-4175-91fc-88159872b42c/volumes" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.105344 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.372437 4942 generic.go:334] "Generic (PLEG): container finished" podID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerID="7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526" exitCode=0 Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.372651 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.372669 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfjgd" event={"ID":"5d21c90f-12fc-4f90-a74e-8da0266710d6","Type":"ContainerDied","Data":"7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526"} Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.372731 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfjgd" event={"ID":"5d21c90f-12fc-4f90-a74e-8da0266710d6","Type":"ContainerDied","Data":"c54b96648e25109d528d6659096fcf53de95dbbd628245057ff76cb7139280b0"} Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.372790 4942 scope.go:117] "RemoveContainer" containerID="7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.397245 4942 scope.go:117] "RemoveContainer" 
containerID="cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.407378 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nfjgd"] Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.418343 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nfjgd"] Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.442594 4942 scope.go:117] "RemoveContainer" containerID="4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.464298 4942 scope.go:117] "RemoveContainer" containerID="7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526" Feb 18 20:14:37 crc kubenswrapper[4942]: E0218 20:14:37.464897 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526\": container with ID starting with 7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526 not found: ID does not exist" containerID="7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.464949 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526"} err="failed to get container status \"7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526\": rpc error: code = NotFound desc = could not find container \"7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526\": container with ID starting with 7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526 not found: ID does not exist" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.465002 4942 scope.go:117] "RemoveContainer" 
containerID="cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee" Feb 18 20:14:37 crc kubenswrapper[4942]: E0218 20:14:37.465478 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee\": container with ID starting with cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee not found: ID does not exist" containerID="cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.465511 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee"} err="failed to get container status \"cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee\": rpc error: code = NotFound desc = could not find container \"cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee\": container with ID starting with cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee not found: ID does not exist" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.465530 4942 scope.go:117] "RemoveContainer" containerID="4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f" Feb 18 20:14:37 crc kubenswrapper[4942]: E0218 20:14:37.467288 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f\": container with ID starting with 4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f not found: ID does not exist" containerID="4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.467324 4942 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f"} err="failed to get container status \"4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f\": rpc error: code = NotFound desc = could not find container \"4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f\": container with ID starting with 4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f not found: ID does not exist" Feb 18 20:14:39 crc kubenswrapper[4942]: I0218 20:14:39.048425 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" path="/var/lib/kubelet/pods/5d21c90f-12fc-4f90-a74e-8da0266710d6/volumes" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.319458 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r6dvm"] Feb 18 20:14:58 crc kubenswrapper[4942]: E0218 20:14:58.320522 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="extract-utilities" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.320538 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="extract-utilities" Feb 18 20:14:58 crc kubenswrapper[4942]: E0218 20:14:58.320564 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="extract-content" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.320572 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="extract-content" Feb 18 20:14:58 crc kubenswrapper[4942]: E0218 20:14:58.320599 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="registry-server" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.320608 4942 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="registry-server" Feb 18 20:14:58 crc kubenswrapper[4942]: E0218 20:14:58.320629 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="registry-server" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.320637 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="registry-server" Feb 18 20:14:58 crc kubenswrapper[4942]: E0218 20:14:58.320658 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="extract-utilities" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.320666 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="extract-utilities" Feb 18 20:14:58 crc kubenswrapper[4942]: E0218 20:14:58.320685 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="extract-content" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.320693 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="extract-content" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.320968 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="registry-server" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.320991 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="registry-server" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.322922 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.330408 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r6dvm"] Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.467605 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gfk4\" (UniqueName: \"kubernetes.io/projected/53c5bf36-8646-4dfb-a736-038ae98719e0-kube-api-access-6gfk4\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.467888 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-catalog-content\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.468080 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-utilities\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.568984 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-catalog-content\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.569085 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-utilities\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.569143 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gfk4\" (UniqueName: \"kubernetes.io/projected/53c5bf36-8646-4dfb-a736-038ae98719e0-kube-api-access-6gfk4\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.569506 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-catalog-content\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.569814 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-utilities\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.588805 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gfk4\" (UniqueName: \"kubernetes.io/projected/53c5bf36-8646-4dfb-a736-038ae98719e0-kube-api-access-6gfk4\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.642554 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:59 crc kubenswrapper[4942]: I0218 20:14:59.135570 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r6dvm"] Feb 18 20:14:59 crc kubenswrapper[4942]: I0218 20:14:59.584185 4942 generic.go:334] "Generic (PLEG): container finished" podID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerID="931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d" exitCode=0 Feb 18 20:14:59 crc kubenswrapper[4942]: I0218 20:14:59.584259 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6dvm" event={"ID":"53c5bf36-8646-4dfb-a736-038ae98719e0","Type":"ContainerDied","Data":"931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d"} Feb 18 20:14:59 crc kubenswrapper[4942]: I0218 20:14:59.584473 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6dvm" event={"ID":"53c5bf36-8646-4dfb-a736-038ae98719e0","Type":"ContainerStarted","Data":"4135badf274346a5d8726ac2a925164017752639dc54d9d9ac76c44933d06402"} Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.146127 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd"] Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.147879 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.150032 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.154998 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.157546 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd"] Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.297530 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwzh9\" (UniqueName: \"kubernetes.io/projected/81acc89a-7a32-4040-93b5-5332398d6374-kube-api-access-jwzh9\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.297646 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81acc89a-7a32-4040-93b5-5332398d6374-secret-volume\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.297726 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81acc89a-7a32-4040-93b5-5332398d6374-config-volume\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.400542 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81acc89a-7a32-4040-93b5-5332398d6374-config-volume\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.400700 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwzh9\" (UniqueName: \"kubernetes.io/projected/81acc89a-7a32-4040-93b5-5332398d6374-kube-api-access-jwzh9\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.400828 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81acc89a-7a32-4040-93b5-5332398d6374-secret-volume\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.401588 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81acc89a-7a32-4040-93b5-5332398d6374-config-volume\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.407261 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/81acc89a-7a32-4040-93b5-5332398d6374-secret-volume\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.418454 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwzh9\" (UniqueName: \"kubernetes.io/projected/81acc89a-7a32-4040-93b5-5332398d6374-kube-api-access-jwzh9\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.468154 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:01 crc kubenswrapper[4942]: W0218 20:15:01.171314 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81acc89a_7a32_4040_93b5_5332398d6374.slice/crio-230cf1b7689637e6ab2309b369f9c74f4311b7a88c2f87c93eea3d9f235f4ec2 WatchSource:0}: Error finding container 230cf1b7689637e6ab2309b369f9c74f4311b7a88c2f87c93eea3d9f235f4ec2: Status 404 returned error can't find the container with id 230cf1b7689637e6ab2309b369f9c74f4311b7a88c2f87c93eea3d9f235f4ec2 Feb 18 20:15:01 crc kubenswrapper[4942]: I0218 20:15:01.173593 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd"] Feb 18 20:15:01 crc kubenswrapper[4942]: I0218 20:15:01.605637 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6dvm" event={"ID":"53c5bf36-8646-4dfb-a736-038ae98719e0","Type":"ContainerStarted","Data":"7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5"} Feb 18 20:15:01 crc kubenswrapper[4942]: I0218 
20:15:01.607222 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" event={"ID":"81acc89a-7a32-4040-93b5-5332398d6374","Type":"ContainerStarted","Data":"75fa89dd848d4145951f50b9174b52dadf015d0268cd8ea1b9dbd6a82f591ee1"} Feb 18 20:15:01 crc kubenswrapper[4942]: I0218 20:15:01.607254 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" event={"ID":"81acc89a-7a32-4040-93b5-5332398d6374","Type":"ContainerStarted","Data":"230cf1b7689637e6ab2309b369f9c74f4311b7a88c2f87c93eea3d9f235f4ec2"} Feb 18 20:15:02 crc kubenswrapper[4942]: I0218 20:15:02.618399 4942 generic.go:334] "Generic (PLEG): container finished" podID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerID="7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5" exitCode=0 Feb 18 20:15:02 crc kubenswrapper[4942]: I0218 20:15:02.618500 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6dvm" event={"ID":"53c5bf36-8646-4dfb-a736-038ae98719e0","Type":"ContainerDied","Data":"7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5"} Feb 18 20:15:02 crc kubenswrapper[4942]: I0218 20:15:02.622628 4942 generic.go:334] "Generic (PLEG): container finished" podID="81acc89a-7a32-4040-93b5-5332398d6374" containerID="75fa89dd848d4145951f50b9174b52dadf015d0268cd8ea1b9dbd6a82f591ee1" exitCode=0 Feb 18 20:15:02 crc kubenswrapper[4942]: I0218 20:15:02.622690 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" event={"ID":"81acc89a-7a32-4040-93b5-5332398d6374","Type":"ContainerDied","Data":"75fa89dd848d4145951f50b9174b52dadf015d0268cd8ea1b9dbd6a82f591ee1"} Feb 18 20:15:02 crc kubenswrapper[4942]: I0218 20:15:02.970425 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.151361 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwzh9\" (UniqueName: \"kubernetes.io/projected/81acc89a-7a32-4040-93b5-5332398d6374-kube-api-access-jwzh9\") pod \"81acc89a-7a32-4040-93b5-5332398d6374\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.151468 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81acc89a-7a32-4040-93b5-5332398d6374-secret-volume\") pod \"81acc89a-7a32-4040-93b5-5332398d6374\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.151597 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81acc89a-7a32-4040-93b5-5332398d6374-config-volume\") pod \"81acc89a-7a32-4040-93b5-5332398d6374\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.153372 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81acc89a-7a32-4040-93b5-5332398d6374-config-volume" (OuterVolumeSpecName: "config-volume") pod "81acc89a-7a32-4040-93b5-5332398d6374" (UID: "81acc89a-7a32-4040-93b5-5332398d6374"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.160024 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81acc89a-7a32-4040-93b5-5332398d6374-kube-api-access-jwzh9" (OuterVolumeSpecName: "kube-api-access-jwzh9") pod "81acc89a-7a32-4040-93b5-5332398d6374" (UID: "81acc89a-7a32-4040-93b5-5332398d6374"). 
InnerVolumeSpecName "kube-api-access-jwzh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.160122 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81acc89a-7a32-4040-93b5-5332398d6374-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "81acc89a-7a32-4040-93b5-5332398d6374" (UID: "81acc89a-7a32-4040-93b5-5332398d6374"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.254121 4942 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81acc89a-7a32-4040-93b5-5332398d6374-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.254455 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwzh9\" (UniqueName: \"kubernetes.io/projected/81acc89a-7a32-4040-93b5-5332398d6374-kube-api-access-jwzh9\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.254470 4942 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81acc89a-7a32-4040-93b5-5332398d6374-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.635122 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6dvm" event={"ID":"53c5bf36-8646-4dfb-a736-038ae98719e0","Type":"ContainerStarted","Data":"35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77"} Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.637809 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" 
event={"ID":"81acc89a-7a32-4040-93b5-5332398d6374","Type":"ContainerDied","Data":"230cf1b7689637e6ab2309b369f9c74f4311b7a88c2f87c93eea3d9f235f4ec2"} Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.637840 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="230cf1b7689637e6ab2309b369f9c74f4311b7a88c2f87c93eea3d9f235f4ec2" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.637853 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.665345 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r6dvm" podStartSLOduration=1.975015827 podStartE2EDuration="5.665328849s" podCreationTimestamp="2026-02-18 20:14:58 +0000 UTC" firstStartedPulling="2026-02-18 20:14:59.585923579 +0000 UTC m=+3459.290856244" lastFinishedPulling="2026-02-18 20:15:03.276236591 +0000 UTC m=+3462.981169266" observedRunningTime="2026-02-18 20:15:03.656340881 +0000 UTC m=+3463.361273546" watchObservedRunningTime="2026-02-18 20:15:03.665328849 +0000 UTC m=+3463.370261514" Feb 18 20:15:04 crc kubenswrapper[4942]: I0218 20:15:04.057491 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh"] Feb 18 20:15:04 crc kubenswrapper[4942]: I0218 20:15:04.066435 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh"] Feb 18 20:15:05 crc kubenswrapper[4942]: I0218 20:15:05.050173 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50e6e4f2-9597-4f04-aa2d-d60b56446486" path="/var/lib/kubelet/pods/50e6e4f2-9597-4f04-aa2d-d60b56446486/volumes" Feb 18 20:15:08 crc kubenswrapper[4942]: I0218 20:15:08.643161 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:15:08 crc kubenswrapper[4942]: I0218 20:15:08.643885 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:15:08 crc kubenswrapper[4942]: I0218 20:15:08.707492 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:15:08 crc kubenswrapper[4942]: I0218 20:15:08.753371 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:15:08 crc kubenswrapper[4942]: I0218 20:15:08.947429 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r6dvm"] Feb 18 20:15:10 crc kubenswrapper[4942]: I0218 20:15:10.695861 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r6dvm" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerName="registry-server" containerID="cri-o://35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77" gracePeriod=2 Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.173353 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.242698 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gfk4\" (UniqueName: \"kubernetes.io/projected/53c5bf36-8646-4dfb-a736-038ae98719e0-kube-api-access-6gfk4\") pod \"53c5bf36-8646-4dfb-a736-038ae98719e0\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.242883 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-utilities\") pod \"53c5bf36-8646-4dfb-a736-038ae98719e0\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.243005 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-catalog-content\") pod \"53c5bf36-8646-4dfb-a736-038ae98719e0\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.244100 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-utilities" (OuterVolumeSpecName: "utilities") pod "53c5bf36-8646-4dfb-a736-038ae98719e0" (UID: "53c5bf36-8646-4dfb-a736-038ae98719e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.255978 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53c5bf36-8646-4dfb-a736-038ae98719e0-kube-api-access-6gfk4" (OuterVolumeSpecName: "kube-api-access-6gfk4") pod "53c5bf36-8646-4dfb-a736-038ae98719e0" (UID: "53c5bf36-8646-4dfb-a736-038ae98719e0"). InnerVolumeSpecName "kube-api-access-6gfk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.344316 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gfk4\" (UniqueName: \"kubernetes.io/projected/53c5bf36-8646-4dfb-a736-038ae98719e0-kube-api-access-6gfk4\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.344356 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.707600 4942 generic.go:334] "Generic (PLEG): container finished" podID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerID="35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77" exitCode=0 Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.707653 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.707677 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6dvm" event={"ID":"53c5bf36-8646-4dfb-a736-038ae98719e0","Type":"ContainerDied","Data":"35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77"} Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.708064 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6dvm" event={"ID":"53c5bf36-8646-4dfb-a736-038ae98719e0","Type":"ContainerDied","Data":"4135badf274346a5d8726ac2a925164017752639dc54d9d9ac76c44933d06402"} Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.708087 4942 scope.go:117] "RemoveContainer" containerID="35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77" Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.726858 4942 scope.go:117] "RemoveContainer" 
containerID="7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5" Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.771182 4942 scope.go:117] "RemoveContainer" containerID="931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d" Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.808390 4942 scope.go:117] "RemoveContainer" containerID="35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77" Feb 18 20:15:11 crc kubenswrapper[4942]: E0218 20:15:11.808936 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77\": container with ID starting with 35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77 not found: ID does not exist" containerID="35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77" Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.808981 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77"} err="failed to get container status \"35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77\": rpc error: code = NotFound desc = could not find container \"35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77\": container with ID starting with 35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77 not found: ID does not exist" Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.809010 4942 scope.go:117] "RemoveContainer" containerID="7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5" Feb 18 20:15:11 crc kubenswrapper[4942]: E0218 20:15:11.809542 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5\": container with ID starting with 
7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5 not found: ID does not exist" containerID="7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5" Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.809563 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5"} err="failed to get container status \"7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5\": rpc error: code = NotFound desc = could not find container \"7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5\": container with ID starting with 7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5 not found: ID does not exist" Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.809576 4942 scope.go:117] "RemoveContainer" containerID="931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d" Feb 18 20:15:11 crc kubenswrapper[4942]: E0218 20:15:11.809994 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d\": container with ID starting with 931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d not found: ID does not exist" containerID="931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d" Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.810037 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d"} err="failed to get container status \"931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d\": rpc error: code = NotFound desc = could not find container \"931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d\": container with ID starting with 931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d not found: ID does not 
exist" Feb 18 20:15:12 crc kubenswrapper[4942]: I0218 20:15:12.254599 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53c5bf36-8646-4dfb-a736-038ae98719e0" (UID: "53c5bf36-8646-4dfb-a736-038ae98719e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:15:12 crc kubenswrapper[4942]: I0218 20:15:12.261265 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:12 crc kubenswrapper[4942]: I0218 20:15:12.340449 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r6dvm"] Feb 18 20:15:12 crc kubenswrapper[4942]: I0218 20:15:12.350681 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r6dvm"] Feb 18 20:15:13 crc kubenswrapper[4942]: I0218 20:15:13.045861 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" path="/var/lib/kubelet/pods/53c5bf36-8646-4dfb-a736-038ae98719e0/volumes" Feb 18 20:15:50 crc kubenswrapper[4942]: I0218 20:15:50.361374 4942 scope.go:117] "RemoveContainer" containerID="45f611558efef294793c691f22c0d11c4ce92907ad4ca205006156562d59216c" Feb 18 20:16:53 crc kubenswrapper[4942]: I0218 20:16:53.741392 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:16:53 crc kubenswrapper[4942]: I0218 20:16:53.741963 4942 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:17:23 crc kubenswrapper[4942]: I0218 20:17:23.740525 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:17:23 crc kubenswrapper[4942]: I0218 20:17:23.741222 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:17:53 crc kubenswrapper[4942]: I0218 20:17:53.740754 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:17:53 crc kubenswrapper[4942]: I0218 20:17:53.741296 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:17:53 crc kubenswrapper[4942]: I0218 20:17:53.741348 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 20:17:53 crc 
kubenswrapper[4942]: I0218 20:17:53.742179 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4637aee37878bbe70bd62244a6764ec2f38f73d2c09b6cb8754f4ec3ccb78f19"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:17:53 crc kubenswrapper[4942]: I0218 20:17:53.742237 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://4637aee37878bbe70bd62244a6764ec2f38f73d2c09b6cb8754f4ec3ccb78f19" gracePeriod=600 Feb 18 20:17:54 crc kubenswrapper[4942]: I0218 20:17:54.673356 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="4637aee37878bbe70bd62244a6764ec2f38f73d2c09b6cb8754f4ec3ccb78f19" exitCode=0 Feb 18 20:17:54 crc kubenswrapper[4942]: I0218 20:17:54.673441 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"4637aee37878bbe70bd62244a6764ec2f38f73d2c09b6cb8754f4ec3ccb78f19"} Feb 18 20:17:54 crc kubenswrapper[4942]: I0218 20:17:54.674084 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"} Feb 18 20:17:54 crc kubenswrapper[4942]: I0218 20:17:54.674161 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:20:23 crc kubenswrapper[4942]: I0218 20:20:23.740405 4942 
patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:20:23 crc kubenswrapper[4942]: I0218 20:20:23.740977 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:20:53 crc kubenswrapper[4942]: I0218 20:20:53.740624 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:20:53 crc kubenswrapper[4942]: I0218 20:20:53.741493 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:21:23 crc kubenswrapper[4942]: I0218 20:21:23.740555 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:21:23 crc kubenswrapper[4942]: I0218 20:21:23.741053 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" 
podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:21:23 crc kubenswrapper[4942]: I0218 20:21:23.741099 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 20:21:23 crc kubenswrapper[4942]: I0218 20:21:23.741970 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:21:23 crc kubenswrapper[4942]: I0218 20:21:23.742030 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" gracePeriod=600 Feb 18 20:21:23 crc kubenswrapper[4942]: E0218 20:21:23.867907 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:21:24 crc kubenswrapper[4942]: I0218 20:21:24.868160 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" exitCode=0 Feb 18 
20:21:24 crc kubenswrapper[4942]: I0218 20:21:24.868290 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"} Feb 18 20:21:24 crc kubenswrapper[4942]: I0218 20:21:24.868882 4942 scope.go:117] "RemoveContainer" containerID="4637aee37878bbe70bd62244a6764ec2f38f73d2c09b6cb8754f4ec3ccb78f19" Feb 18 20:21:24 crc kubenswrapper[4942]: I0218 20:21:24.869615 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:21:24 crc kubenswrapper[4942]: E0218 20:21:24.870163 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:21:39 crc kubenswrapper[4942]: I0218 20:21:39.036440 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:21:39 crc kubenswrapper[4942]: E0218 20:21:39.037730 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:21:50 crc kubenswrapper[4942]: I0218 20:21:50.036096 4942 scope.go:117] "RemoveContainer" 
containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:21:50 crc kubenswrapper[4942]: E0218 20:21:50.036958 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:22:01 crc kubenswrapper[4942]: I0218 20:22:01.053488 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:22:01 crc kubenswrapper[4942]: E0218 20:22:01.056352 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:22:13 crc kubenswrapper[4942]: I0218 20:22:13.035900 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:22:13 crc kubenswrapper[4942]: E0218 20:22:13.037150 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:22:25 crc kubenswrapper[4942]: I0218 20:22:25.036457 4942 scope.go:117] 
"RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:22:25 crc kubenswrapper[4942]: E0218 20:22:25.037276 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:22:38 crc kubenswrapper[4942]: I0218 20:22:38.036506 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:22:38 crc kubenswrapper[4942]: E0218 20:22:38.037434 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:22:52 crc kubenswrapper[4942]: I0218 20:22:52.037222 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:22:52 crc kubenswrapper[4942]: E0218 20:22:52.038389 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:23:05 crc kubenswrapper[4942]: I0218 20:23:05.035553 
4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:23:05 crc kubenswrapper[4942]: E0218 20:23:05.036429 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:23:19 crc kubenswrapper[4942]: I0218 20:23:19.036196 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:23:19 crc kubenswrapper[4942]: E0218 20:23:19.036932 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:23:33 crc kubenswrapper[4942]: I0218 20:23:33.035954 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:23:33 crc kubenswrapper[4942]: E0218 20:23:33.036627 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 
20:23:36.639894 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b5g55"] Feb 18 20:23:36 crc kubenswrapper[4942]: E0218 20:23:36.640633 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81acc89a-7a32-4040-93b5-5332398d6374" containerName="collect-profiles" Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.640645 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="81acc89a-7a32-4040-93b5-5332398d6374" containerName="collect-profiles" Feb 18 20:23:36 crc kubenswrapper[4942]: E0218 20:23:36.640676 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerName="extract-content" Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.640682 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerName="extract-content" Feb 18 20:23:36 crc kubenswrapper[4942]: E0218 20:23:36.640696 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerName="extract-utilities" Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.640702 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerName="extract-utilities" Feb 18 20:23:36 crc kubenswrapper[4942]: E0218 20:23:36.640722 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerName="registry-server" Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.640729 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerName="registry-server" Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.640947 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="81acc89a-7a32-4040-93b5-5332398d6374" containerName="collect-profiles" Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.640966 4942 
memory_manager.go:354] "RemoveStaleState removing state" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerName="registry-server" Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.642311 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.658027 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b5g55"] Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.791550 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-utilities\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.791622 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwwpq\" (UniqueName: \"kubernetes.io/projected/65f646d4-3b0a-4e0a-937c-a2452f28d07a-kube-api-access-zwwpq\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.791776 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-catalog-content\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.893239 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwwpq\" (UniqueName: 
\"kubernetes.io/projected/65f646d4-3b0a-4e0a-937c-a2452f28d07a-kube-api-access-zwwpq\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.893334 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-catalog-content\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.893427 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-utilities\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.893974 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-utilities\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.893992 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-catalog-content\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.913548 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwwpq\" (UniqueName: 
\"kubernetes.io/projected/65f646d4-3b0a-4e0a-937c-a2452f28d07a-kube-api-access-zwwpq\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:37 crc kubenswrapper[4942]: I0218 20:23:37.001569 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:37 crc kubenswrapper[4942]: I0218 20:23:37.555259 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b5g55"] Feb 18 20:23:38 crc kubenswrapper[4942]: I0218 20:23:38.329827 4942 generic.go:334] "Generic (PLEG): container finished" podID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerID="ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087" exitCode=0 Feb 18 20:23:38 crc kubenswrapper[4942]: I0218 20:23:38.330207 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5g55" event={"ID":"65f646d4-3b0a-4e0a-937c-a2452f28d07a","Type":"ContainerDied","Data":"ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087"} Feb 18 20:23:38 crc kubenswrapper[4942]: I0218 20:23:38.330253 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5g55" event={"ID":"65f646d4-3b0a-4e0a-937c-a2452f28d07a","Type":"ContainerStarted","Data":"b9cc4d0833f807ebae8c60f122da6c2a174c37c08c998b0062cddc24f7779bfc"} Feb 18 20:23:38 crc kubenswrapper[4942]: I0218 20:23:38.336892 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:23:39 crc kubenswrapper[4942]: I0218 20:23:39.341186 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5g55" event={"ID":"65f646d4-3b0a-4e0a-937c-a2452f28d07a","Type":"ContainerStarted","Data":"03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1"} Feb 18 20:23:41 
crc kubenswrapper[4942]: I0218 20:23:41.363876 4942 generic.go:334] "Generic (PLEG): container finished" podID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerID="03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1" exitCode=0 Feb 18 20:23:41 crc kubenswrapper[4942]: I0218 20:23:41.364073 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5g55" event={"ID":"65f646d4-3b0a-4e0a-937c-a2452f28d07a","Type":"ContainerDied","Data":"03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1"} Feb 18 20:23:42 crc kubenswrapper[4942]: I0218 20:23:42.377594 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5g55" event={"ID":"65f646d4-3b0a-4e0a-937c-a2452f28d07a","Type":"ContainerStarted","Data":"3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40"} Feb 18 20:23:44 crc kubenswrapper[4942]: I0218 20:23:44.037498 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:23:44 crc kubenswrapper[4942]: E0218 20:23:44.038179 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:23:47 crc kubenswrapper[4942]: I0218 20:23:47.002607 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:47 crc kubenswrapper[4942]: I0218 20:23:47.003172 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:47 crc kubenswrapper[4942]: I0218 20:23:47.083801 
4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:47 crc kubenswrapper[4942]: I0218 20:23:47.113905 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b5g55" podStartSLOduration=7.638243856 podStartE2EDuration="11.113884617s" podCreationTimestamp="2026-02-18 20:23:36 +0000 UTC" firstStartedPulling="2026-02-18 20:23:38.334565713 +0000 UTC m=+3978.039498388" lastFinishedPulling="2026-02-18 20:23:41.810206474 +0000 UTC m=+3981.515139149" observedRunningTime="2026-02-18 20:23:42.403337817 +0000 UTC m=+3982.108270512" watchObservedRunningTime="2026-02-18 20:23:47.113884617 +0000 UTC m=+3986.818817282" Feb 18 20:23:47 crc kubenswrapper[4942]: I0218 20:23:47.502385 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:48 crc kubenswrapper[4942]: I0218 20:23:48.344370 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b5g55"] Feb 18 20:23:49 crc kubenswrapper[4942]: I0218 20:23:49.454524 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b5g55" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerName="registry-server" containerID="cri-o://3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40" gracePeriod=2 Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.130307 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.227384 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwwpq\" (UniqueName: \"kubernetes.io/projected/65f646d4-3b0a-4e0a-937c-a2452f28d07a-kube-api-access-zwwpq\") pod \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.227511 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-catalog-content\") pod \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.227561 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-utilities\") pod \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.229541 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-utilities" (OuterVolumeSpecName: "utilities") pod "65f646d4-3b0a-4e0a-937c-a2452f28d07a" (UID: "65f646d4-3b0a-4e0a-937c-a2452f28d07a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.235989 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65f646d4-3b0a-4e0a-937c-a2452f28d07a-kube-api-access-zwwpq" (OuterVolumeSpecName: "kube-api-access-zwwpq") pod "65f646d4-3b0a-4e0a-937c-a2452f28d07a" (UID: "65f646d4-3b0a-4e0a-937c-a2452f28d07a"). InnerVolumeSpecName "kube-api-access-zwwpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.309026 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65f646d4-3b0a-4e0a-937c-a2452f28d07a" (UID: "65f646d4-3b0a-4e0a-937c-a2452f28d07a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.330527 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.330570 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.330587 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwwpq\" (UniqueName: \"kubernetes.io/projected/65f646d4-3b0a-4e0a-937c-a2452f28d07a-kube-api-access-zwwpq\") on node \"crc\" DevicePath \"\"" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.468829 4942 generic.go:334] "Generic (PLEG): container finished" podID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerID="3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40" exitCode=0 Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.468880 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5g55" event={"ID":"65f646d4-3b0a-4e0a-937c-a2452f28d07a","Type":"ContainerDied","Data":"3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40"} Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.468917 4942 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-b5g55" event={"ID":"65f646d4-3b0a-4e0a-937c-a2452f28d07a","Type":"ContainerDied","Data":"b9cc4d0833f807ebae8c60f122da6c2a174c37c08c998b0062cddc24f7779bfc"} Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.468939 4942 scope.go:117] "RemoveContainer" containerID="3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.468945 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.502960 4942 scope.go:117] "RemoveContainer" containerID="03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.538187 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b5g55"] Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.550001 4942 scope.go:117] "RemoveContainer" containerID="ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.550601 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b5g55"] Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.603818 4942 scope.go:117] "RemoveContainer" containerID="3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40" Feb 18 20:23:50 crc kubenswrapper[4942]: E0218 20:23:50.604479 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40\": container with ID starting with 3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40 not found: ID does not exist" containerID="3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 
20:23:50.604551 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40"} err="failed to get container status \"3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40\": rpc error: code = NotFound desc = could not find container \"3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40\": container with ID starting with 3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40 not found: ID does not exist" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.604594 4942 scope.go:117] "RemoveContainer" containerID="03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1" Feb 18 20:23:50 crc kubenswrapper[4942]: E0218 20:23:50.605155 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1\": container with ID starting with 03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1 not found: ID does not exist" containerID="03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.605218 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1"} err="failed to get container status \"03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1\": rpc error: code = NotFound desc = could not find container \"03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1\": container with ID starting with 03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1 not found: ID does not exist" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.605255 4942 scope.go:117] "RemoveContainer" containerID="ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087" Feb 18 20:23:50 crc 
kubenswrapper[4942]: E0218 20:23:50.606482 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087\": container with ID starting with ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087 not found: ID does not exist" containerID="ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.606521 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087"} err="failed to get container status \"ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087\": rpc error: code = NotFound desc = could not find container \"ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087\": container with ID starting with ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087 not found: ID does not exist" Feb 18 20:23:51 crc kubenswrapper[4942]: I0218 20:23:51.052605 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" path="/var/lib/kubelet/pods/65f646d4-3b0a-4e0a-937c-a2452f28d07a/volumes" Feb 18 20:23:55 crc kubenswrapper[4942]: I0218 20:23:55.036232 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:23:55 crc kubenswrapper[4942]: E0218 20:23:55.037276 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:24:08 crc 
kubenswrapper[4942]: I0218 20:24:08.010692 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tkmxc"] Feb 18 20:24:08 crc kubenswrapper[4942]: E0218 20:24:08.015446 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerName="extract-content" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.015464 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerName="extract-content" Feb 18 20:24:08 crc kubenswrapper[4942]: E0218 20:24:08.015487 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerName="extract-utilities" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.015494 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerName="extract-utilities" Feb 18 20:24:08 crc kubenswrapper[4942]: E0218 20:24:08.015504 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerName="registry-server" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.015510 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerName="registry-server" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.015699 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerName="registry-server" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.017497 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.023954 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tkmxc"] Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.103119 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-catalog-content\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.103430 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-utilities\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.103484 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxr9j\" (UniqueName: \"kubernetes.io/projected/39477d0f-18a2-4113-9d72-f0ea81f9fae0-kube-api-access-rxr9j\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.206263 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-utilities\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.206310 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rxr9j\" (UniqueName: \"kubernetes.io/projected/39477d0f-18a2-4113-9d72-f0ea81f9fae0-kube-api-access-rxr9j\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.206469 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-catalog-content\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.206839 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-utilities\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.206945 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-catalog-content\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.771139 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxr9j\" (UniqueName: \"kubernetes.io/projected/39477d0f-18a2-4113-9d72-f0ea81f9fae0-kube-api-access-rxr9j\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.938071 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:09 crc kubenswrapper[4942]: I0218 20:24:09.392791 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tkmxc"] Feb 18 20:24:09 crc kubenswrapper[4942]: I0218 20:24:09.690146 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkmxc" event={"ID":"39477d0f-18a2-4113-9d72-f0ea81f9fae0","Type":"ContainerStarted","Data":"cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1"} Feb 18 20:24:09 crc kubenswrapper[4942]: I0218 20:24:09.690412 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkmxc" event={"ID":"39477d0f-18a2-4113-9d72-f0ea81f9fae0","Type":"ContainerStarted","Data":"bec3723b86d98bbf735cea6ac5f7c58c09652650f8968f3e83bbea71103f6f9b"} Feb 18 20:24:10 crc kubenswrapper[4942]: I0218 20:24:10.036219 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:24:10 crc kubenswrapper[4942]: E0218 20:24:10.036802 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:24:10 crc kubenswrapper[4942]: I0218 20:24:10.702709 4942 generic.go:334] "Generic (PLEG): container finished" podID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerID="cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1" exitCode=0 Feb 18 20:24:10 crc kubenswrapper[4942]: I0218 20:24:10.702805 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkmxc" 
event={"ID":"39477d0f-18a2-4113-9d72-f0ea81f9fae0","Type":"ContainerDied","Data":"cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1"} Feb 18 20:24:11 crc kubenswrapper[4942]: I0218 20:24:11.713381 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkmxc" event={"ID":"39477d0f-18a2-4113-9d72-f0ea81f9fae0","Type":"ContainerStarted","Data":"a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff"} Feb 18 20:24:16 crc kubenswrapper[4942]: I0218 20:24:16.788267 4942 generic.go:334] "Generic (PLEG): container finished" podID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerID="a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff" exitCode=0 Feb 18 20:24:16 crc kubenswrapper[4942]: I0218 20:24:16.788383 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkmxc" event={"ID":"39477d0f-18a2-4113-9d72-f0ea81f9fae0","Type":"ContainerDied","Data":"a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff"} Feb 18 20:24:17 crc kubenswrapper[4942]: I0218 20:24:17.802275 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkmxc" event={"ID":"39477d0f-18a2-4113-9d72-f0ea81f9fae0","Type":"ContainerStarted","Data":"e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c"} Feb 18 20:24:17 crc kubenswrapper[4942]: I0218 20:24:17.834549 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tkmxc" podStartSLOduration=4.374556807 podStartE2EDuration="10.834531604s" podCreationTimestamp="2026-02-18 20:24:07 +0000 UTC" firstStartedPulling="2026-02-18 20:24:10.706156696 +0000 UTC m=+4010.411089401" lastFinishedPulling="2026-02-18 20:24:17.166131533 +0000 UTC m=+4016.871064198" observedRunningTime="2026-02-18 20:24:17.821944501 +0000 UTC m=+4017.526877176" watchObservedRunningTime="2026-02-18 20:24:17.834531604 +0000 UTC m=+4017.539464269" 
Feb 18 20:24:18 crc kubenswrapper[4942]: I0218 20:24:18.939170 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:18 crc kubenswrapper[4942]: I0218 20:24:18.939242 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:19 crc kubenswrapper[4942]: I0218 20:24:19.997026 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tkmxc" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="registry-server" probeResult="failure" output=< Feb 18 20:24:19 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 20:24:19 crc kubenswrapper[4942]: > Feb 18 20:24:24 crc kubenswrapper[4942]: I0218 20:24:24.036347 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:24:24 crc kubenswrapper[4942]: E0218 20:24:24.037430 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:24:29 crc kubenswrapper[4942]: I0218 20:24:29.001900 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:29 crc kubenswrapper[4942]: I0218 20:24:29.058714 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:29 crc kubenswrapper[4942]: I0218 20:24:29.249286 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-tkmxc"] Feb 18 20:24:30 crc kubenswrapper[4942]: I0218 20:24:30.942916 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tkmxc" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="registry-server" containerID="cri-o://e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c" gracePeriod=2 Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.506457 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.608529 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-catalog-content\") pod \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.608627 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxr9j\" (UniqueName: \"kubernetes.io/projected/39477d0f-18a2-4113-9d72-f0ea81f9fae0-kube-api-access-rxr9j\") pod \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.608756 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-utilities\") pod \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.609713 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-utilities" (OuterVolumeSpecName: "utilities") pod "39477d0f-18a2-4113-9d72-f0ea81f9fae0" (UID: 
"39477d0f-18a2-4113-9d72-f0ea81f9fae0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.618053 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39477d0f-18a2-4113-9d72-f0ea81f9fae0-kube-api-access-rxr9j" (OuterVolumeSpecName: "kube-api-access-rxr9j") pod "39477d0f-18a2-4113-9d72-f0ea81f9fae0" (UID: "39477d0f-18a2-4113-9d72-f0ea81f9fae0"). InnerVolumeSpecName "kube-api-access-rxr9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.711137 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxr9j\" (UniqueName: \"kubernetes.io/projected/39477d0f-18a2-4113-9d72-f0ea81f9fae0-kube-api-access-rxr9j\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.711181 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.762453 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39477d0f-18a2-4113-9d72-f0ea81f9fae0" (UID: "39477d0f-18a2-4113-9d72-f0ea81f9fae0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.813435 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.958227 4942 generic.go:334] "Generic (PLEG): container finished" podID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerID="e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c" exitCode=0 Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.958299 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.958326 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkmxc" event={"ID":"39477d0f-18a2-4113-9d72-f0ea81f9fae0","Type":"ContainerDied","Data":"e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c"} Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.959628 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkmxc" event={"ID":"39477d0f-18a2-4113-9d72-f0ea81f9fae0","Type":"ContainerDied","Data":"bec3723b86d98bbf735cea6ac5f7c58c09652650f8968f3e83bbea71103f6f9b"} Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.959657 4942 scope.go:117] "RemoveContainer" containerID="e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.993843 4942 scope.go:117] "RemoveContainer" containerID="a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.996577 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tkmxc"] Feb 18 20:24:32 crc kubenswrapper[4942]: I0218 
20:24:32.004324 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tkmxc"] Feb 18 20:24:32 crc kubenswrapper[4942]: I0218 20:24:32.028261 4942 scope.go:117] "RemoveContainer" containerID="cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1" Feb 18 20:24:32 crc kubenswrapper[4942]: I0218 20:24:32.085323 4942 scope.go:117] "RemoveContainer" containerID="e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c" Feb 18 20:24:32 crc kubenswrapper[4942]: E0218 20:24:32.085885 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c\": container with ID starting with e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c not found: ID does not exist" containerID="e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c" Feb 18 20:24:32 crc kubenswrapper[4942]: I0218 20:24:32.086023 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c"} err="failed to get container status \"e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c\": rpc error: code = NotFound desc = could not find container \"e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c\": container with ID starting with e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c not found: ID does not exist" Feb 18 20:24:32 crc kubenswrapper[4942]: I0218 20:24:32.086108 4942 scope.go:117] "RemoveContainer" containerID="a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff" Feb 18 20:24:32 crc kubenswrapper[4942]: E0218 20:24:32.086592 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff\": container with ID 
starting with a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff not found: ID does not exist" containerID="a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff"
Feb 18 20:24:32 crc kubenswrapper[4942]: I0218 20:24:32.086622 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff"} err="failed to get container status \"a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff\": rpc error: code = NotFound desc = could not find container \"a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff\": container with ID starting with a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff not found: ID does not exist"
Feb 18 20:24:32 crc kubenswrapper[4942]: I0218 20:24:32.086644 4942 scope.go:117] "RemoveContainer" containerID="cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1"
Feb 18 20:24:32 crc kubenswrapper[4942]: E0218 20:24:32.086964 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1\": container with ID starting with cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1 not found: ID does not exist" containerID="cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1"
Feb 18 20:24:32 crc kubenswrapper[4942]: I0218 20:24:32.087114 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1"} err="failed to get container status \"cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1\": rpc error: code = NotFound desc = could not find container \"cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1\": container with ID starting with cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1 not found: ID does not exist"
Feb 18 20:24:33 crc kubenswrapper[4942]: I0218 20:24:33.049997 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" path="/var/lib/kubelet/pods/39477d0f-18a2-4113-9d72-f0ea81f9fae0/volumes"
Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.418808 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sdc46"]
Feb 18 20:24:35 crc kubenswrapper[4942]: E0218 20:24:35.419338 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="registry-server"
Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.419353 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="registry-server"
Feb 18 20:24:35 crc kubenswrapper[4942]: E0218 20:24:35.419365 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="extract-content"
Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.419372 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="extract-content"
Feb 18 20:24:35 crc kubenswrapper[4942]: E0218 20:24:35.419392 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="extract-utilities"
Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.419398 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="extract-utilities"
Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.419566 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="registry-server"
Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.421040 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdc46"
Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.434554 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdc46"]
Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.594339 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pgtl\" (UniqueName: \"kubernetes.io/projected/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-kube-api-access-9pgtl\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46"
Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.594494 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-utilities\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46"
Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.594597 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-catalog-content\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46"
Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.696631 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pgtl\" (UniqueName: \"kubernetes.io/projected/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-kube-api-access-9pgtl\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46"
Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.696716 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-utilities\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46"
Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.696784 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-catalog-content\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46"
Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.697398 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-catalog-content\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46"
Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.697697 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-utilities\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46"
Feb 18 20:24:36 crc kubenswrapper[4942]: I0218 20:24:36.173338 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pgtl\" (UniqueName: \"kubernetes.io/projected/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-kube-api-access-9pgtl\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46"
Feb 18 20:24:36 crc kubenswrapper[4942]: I0218 20:24:36.349954 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdc46"
Feb 18 20:24:36 crc kubenswrapper[4942]: I0218 20:24:36.838199 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdc46"]
Feb 18 20:24:37 crc kubenswrapper[4942]: I0218 20:24:37.023069 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdc46" event={"ID":"c74f41b3-2cc8-42d4-90b3-e2252bed77f6","Type":"ContainerStarted","Data":"c9b0fb017ab3cc6daf6a8949493b9a9ef3c2c4268df580e987c1eed967f4d2fa"}
Feb 18 20:24:38 crc kubenswrapper[4942]: I0218 20:24:38.036219 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:24:38 crc kubenswrapper[4942]: E0218 20:24:38.037130 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:24:38 crc kubenswrapper[4942]: I0218 20:24:38.038368 4942 generic.go:334] "Generic (PLEG): container finished" podID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerID="c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3" exitCode=0
Feb 18 20:24:38 crc kubenswrapper[4942]: I0218 20:24:38.038429 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdc46" event={"ID":"c74f41b3-2cc8-42d4-90b3-e2252bed77f6","Type":"ContainerDied","Data":"c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3"}
Feb 18 20:24:40 crc kubenswrapper[4942]: I0218 20:24:40.062161 4942 generic.go:334] "Generic (PLEG): container finished" podID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerID="0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47" exitCode=0
Feb 18 20:24:40 crc kubenswrapper[4942]: I0218 20:24:40.062251 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdc46" event={"ID":"c74f41b3-2cc8-42d4-90b3-e2252bed77f6","Type":"ContainerDied","Data":"0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47"}
Feb 18 20:24:41 crc kubenswrapper[4942]: I0218 20:24:41.075188 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdc46" event={"ID":"c74f41b3-2cc8-42d4-90b3-e2252bed77f6","Type":"ContainerStarted","Data":"b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59"}
Feb 18 20:24:41 crc kubenswrapper[4942]: I0218 20:24:41.094513 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sdc46" podStartSLOduration=3.658837708 podStartE2EDuration="6.094497786s" podCreationTimestamp="2026-02-18 20:24:35 +0000 UTC" firstStartedPulling="2026-02-18 20:24:38.040593532 +0000 UTC m=+4037.745526237" lastFinishedPulling="2026-02-18 20:24:40.47625365 +0000 UTC m=+4040.181186315" observedRunningTime="2026-02-18 20:24:41.092681138 +0000 UTC m=+4040.797613803" watchObservedRunningTime="2026-02-18 20:24:41.094497786 +0000 UTC m=+4040.799430451"
Feb 18 20:24:46 crc kubenswrapper[4942]: I0218 20:24:46.351118 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sdc46"
Feb 18 20:24:46 crc kubenswrapper[4942]: I0218 20:24:46.351639 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sdc46"
Feb 18 20:24:46 crc kubenswrapper[4942]: I0218 20:24:46.833474 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sdc46"
Feb 18 20:24:47 crc kubenswrapper[4942]: I0218 20:24:47.199506 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sdc46"
Feb 18 20:24:47 crc kubenswrapper[4942]: I0218 20:24:47.243157 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdc46"]
Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.163542 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sdc46" podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerName="registry-server" containerID="cri-o://b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59" gracePeriod=2
Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.690468 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdc46"
Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.832473 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pgtl\" (UniqueName: \"kubernetes.io/projected/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-kube-api-access-9pgtl\") pod \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") "
Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.832660 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-catalog-content\") pod \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") "
Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.832763 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-utilities\") pod \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") "
Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.833583 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-utilities" (OuterVolumeSpecName: "utilities") pod "c74f41b3-2cc8-42d4-90b3-e2252bed77f6" (UID: "c74f41b3-2cc8-42d4-90b3-e2252bed77f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.841835 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-kube-api-access-9pgtl" (OuterVolumeSpecName: "kube-api-access-9pgtl") pod "c74f41b3-2cc8-42d4-90b3-e2252bed77f6" (UID: "c74f41b3-2cc8-42d4-90b3-e2252bed77f6"). InnerVolumeSpecName "kube-api-access-9pgtl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.879353 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c74f41b3-2cc8-42d4-90b3-e2252bed77f6" (UID: "c74f41b3-2cc8-42d4-90b3-e2252bed77f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.935159 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.935204 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.935214 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pgtl\" (UniqueName: \"kubernetes.io/projected/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-kube-api-access-9pgtl\") on node \"crc\" DevicePath \"\""
Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.193319 4942 generic.go:334] "Generic (PLEG): container finished" podID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerID="b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59" exitCode=0
Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.193378 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdc46" event={"ID":"c74f41b3-2cc8-42d4-90b3-e2252bed77f6","Type":"ContainerDied","Data":"b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59"}
Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.193411 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdc46" event={"ID":"c74f41b3-2cc8-42d4-90b3-e2252bed77f6","Type":"ContainerDied","Data":"c9b0fb017ab3cc6daf6a8949493b9a9ef3c2c4268df580e987c1eed967f4d2fa"}
Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.193450 4942 scope.go:117] "RemoveContainer" containerID="b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59"
Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.193674 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdc46"
Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.252614 4942 scope.go:117] "RemoveContainer" containerID="0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47"
Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.268533 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdc46"]
Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.279445 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdc46"]
Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.281372 4942 scope.go:117] "RemoveContainer" containerID="c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3"
Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.352365 4942 scope.go:117] "RemoveContainer" containerID="b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59"
Feb 18 20:24:50 crc kubenswrapper[4942]: E0218 20:24:50.352864 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59\": container with ID starting with b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59 not found: ID does not exist" containerID="b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59"
Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.352977 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59"} err="failed to get container status \"b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59\": rpc error: code = NotFound desc = could not find container \"b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59\": container with ID starting with b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59 not found: ID does not exist"
Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.353031 4942 scope.go:117] "RemoveContainer" containerID="0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47"
Feb 18 20:24:50 crc kubenswrapper[4942]: E0218 20:24:50.353595 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47\": container with ID starting with 0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47 not found: ID does not exist" containerID="0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47"
Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.353627 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47"} err="failed to get container status \"0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47\": rpc error: code = NotFound desc = could not find container \"0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47\": container with ID starting with 0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47 not found: ID does not exist"
Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.353649 4942 scope.go:117] "RemoveContainer" containerID="c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3"
Feb 18 20:24:50 crc kubenswrapper[4942]: E0218 20:24:50.354032 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3\": container with ID starting with c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3 not found: ID does not exist" containerID="c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3"
Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.354103 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3"} err="failed to get container status \"c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3\": rpc error: code = NotFound desc = could not find container \"c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3\": container with ID starting with c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3 not found: ID does not exist"
Feb 18 20:24:51 crc kubenswrapper[4942]: I0218 20:24:51.058932 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" path="/var/lib/kubelet/pods/c74f41b3-2cc8-42d4-90b3-e2252bed77f6/volumes"
Feb 18 20:24:53 crc kubenswrapper[4942]: I0218 20:24:53.035922 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:24:53 crc kubenswrapper[4942]: E0218 20:24:53.036595 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:25:04 crc kubenswrapper[4942]: I0218 20:25:04.036356 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:25:04 crc kubenswrapper[4942]: E0218 20:25:04.037332 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:25:19 crc kubenswrapper[4942]: I0218 20:25:19.036011 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:25:19 crc kubenswrapper[4942]: E0218 20:25:19.036886 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.199528 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5nxbp"]
Feb 18 20:25:31 crc kubenswrapper[4942]: E0218 20:25:31.201063 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerName="extract-utilities"
Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.201099 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerName="extract-utilities"
Feb 18 20:25:31 crc kubenswrapper[4942]: E0218 20:25:31.201182 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerName="extract-content"
Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.201203 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerName="extract-content"
Feb 18 20:25:31 crc kubenswrapper[4942]: E0218 20:25:31.201276 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerName="registry-server"
Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.201294 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerName="registry-server"
Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.201810 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerName="registry-server"
Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.205634 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.218793 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5nxbp"]
Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.298900 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgt89\" (UniqueName: \"kubernetes.io/projected/b14df05a-46ca-4dba-a05c-8aca88ea9643-kube-api-access-sgt89\") pod \"certified-operators-5nxbp\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") " pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.299293 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-utilities\") pod \"certified-operators-5nxbp\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") " pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.299450 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-catalog-content\") pod \"certified-operators-5nxbp\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") " pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.401288 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgt89\" (UniqueName: \"kubernetes.io/projected/b14df05a-46ca-4dba-a05c-8aca88ea9643-kube-api-access-sgt89\") pod \"certified-operators-5nxbp\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") " pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.401423 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-utilities\") pod \"certified-operators-5nxbp\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") " pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.401488 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-catalog-content\") pod \"certified-operators-5nxbp\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") " pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.402017 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-utilities\") pod \"certified-operators-5nxbp\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") " pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.402100 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-catalog-content\") pod \"certified-operators-5nxbp\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") " pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.421840 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgt89\" (UniqueName: \"kubernetes.io/projected/b14df05a-46ca-4dba-a05c-8aca88ea9643-kube-api-access-sgt89\") pod \"certified-operators-5nxbp\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") " pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.534398 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:32 crc kubenswrapper[4942]: I0218 20:25:32.082848 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5nxbp"]
Feb 18 20:25:32 crc kubenswrapper[4942]: I0218 20:25:32.621050 4942 generic.go:334] "Generic (PLEG): container finished" podID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerID="a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64" exitCode=0
Feb 18 20:25:32 crc kubenswrapper[4942]: I0218 20:25:32.621171 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nxbp" event={"ID":"b14df05a-46ca-4dba-a05c-8aca88ea9643","Type":"ContainerDied","Data":"a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64"}
Feb 18 20:25:32 crc kubenswrapper[4942]: I0218 20:25:32.621403 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nxbp" event={"ID":"b14df05a-46ca-4dba-a05c-8aca88ea9643","Type":"ContainerStarted","Data":"b62cbba7efcc3057eaed590174d862635c69b01aabde9c445fab7d27d0db35d8"}
Feb 18 20:25:33 crc kubenswrapper[4942]: I0218 20:25:33.635480 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nxbp" event={"ID":"b14df05a-46ca-4dba-a05c-8aca88ea9643","Type":"ContainerStarted","Data":"bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20"}
Feb 18 20:25:34 crc kubenswrapper[4942]: I0218 20:25:34.035833 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:25:34 crc kubenswrapper[4942]: E0218 20:25:34.036150 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:25:35 crc kubenswrapper[4942]: I0218 20:25:35.656565 4942 generic.go:334] "Generic (PLEG): container finished" podID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerID="bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20" exitCode=0
Feb 18 20:25:35 crc kubenswrapper[4942]: I0218 20:25:35.656663 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nxbp" event={"ID":"b14df05a-46ca-4dba-a05c-8aca88ea9643","Type":"ContainerDied","Data":"bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20"}
Feb 18 20:25:36 crc kubenswrapper[4942]: I0218 20:25:36.668580 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nxbp" event={"ID":"b14df05a-46ca-4dba-a05c-8aca88ea9643","Type":"ContainerStarted","Data":"f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0"}
Feb 18 20:25:36 crc kubenswrapper[4942]: I0218 20:25:36.690945 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5nxbp" podStartSLOduration=2.254562591 podStartE2EDuration="5.690927664s" podCreationTimestamp="2026-02-18 20:25:31 +0000 UTC" firstStartedPulling="2026-02-18 20:25:32.623537087 +0000 UTC m=+4092.328469792" lastFinishedPulling="2026-02-18 20:25:36.0599022 +0000 UTC m=+4095.764834865" observedRunningTime="2026-02-18 20:25:36.685937042 +0000 UTC m=+4096.390869717" watchObservedRunningTime="2026-02-18 20:25:36.690927664 +0000 UTC m=+4096.395860329"
Feb 18 20:25:41 crc kubenswrapper[4942]: I0218 20:25:41.534994 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:41 crc kubenswrapper[4942]: I0218 20:25:41.535640 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:41 crc kubenswrapper[4942]: I0218 20:25:41.598455 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:41 crc kubenswrapper[4942]: I0218 20:25:41.791425 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:41 crc kubenswrapper[4942]: I0218 20:25:41.857793 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5nxbp"]
Feb 18 20:25:43 crc kubenswrapper[4942]: I0218 20:25:43.738958 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5nxbp" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerName="registry-server" containerID="cri-o://f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0" gracePeriod=2
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.334120 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.371633 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgt89\" (UniqueName: \"kubernetes.io/projected/b14df05a-46ca-4dba-a05c-8aca88ea9643-kube-api-access-sgt89\") pod \"b14df05a-46ca-4dba-a05c-8aca88ea9643\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") "
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.371960 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-utilities\") pod \"b14df05a-46ca-4dba-a05c-8aca88ea9643\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") "
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.372219 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-catalog-content\") pod \"b14df05a-46ca-4dba-a05c-8aca88ea9643\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") "
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.382441 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-utilities" (OuterVolumeSpecName: "utilities") pod "b14df05a-46ca-4dba-a05c-8aca88ea9643" (UID: "b14df05a-46ca-4dba-a05c-8aca88ea9643"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.400090 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b14df05a-46ca-4dba-a05c-8aca88ea9643-kube-api-access-sgt89" (OuterVolumeSpecName: "kube-api-access-sgt89") pod "b14df05a-46ca-4dba-a05c-8aca88ea9643" (UID: "b14df05a-46ca-4dba-a05c-8aca88ea9643"). InnerVolumeSpecName "kube-api-access-sgt89". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.464638 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b14df05a-46ca-4dba-a05c-8aca88ea9643" (UID: "b14df05a-46ca-4dba-a05c-8aca88ea9643"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.474071 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.474115 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgt89\" (UniqueName: \"kubernetes.io/projected/b14df05a-46ca-4dba-a05c-8aca88ea9643-kube-api-access-sgt89\") on node \"crc\" DevicePath \"\""
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.474143 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.748928 4942 generic.go:334] "Generic (PLEG): container finished" podID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerID="f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0" exitCode=0
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.748974 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nxbp" event={"ID":"b14df05a-46ca-4dba-a05c-8aca88ea9643","Type":"ContainerDied","Data":"f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0"}
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.748991 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.749007 4942 scope.go:117] "RemoveContainer" containerID="f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0"
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.748996 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nxbp" event={"ID":"b14df05a-46ca-4dba-a05c-8aca88ea9643","Type":"ContainerDied","Data":"b62cbba7efcc3057eaed590174d862635c69b01aabde9c445fab7d27d0db35d8"}
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.773799 4942 scope.go:117] "RemoveContainer" containerID="bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20"
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.793469 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5nxbp"]
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.804474 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5nxbp"]
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.811640 4942 scope.go:117] "RemoveContainer" containerID="a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64"
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.871166 4942 scope.go:117] "RemoveContainer" containerID="f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0"
Feb 18 20:25:44 crc kubenswrapper[4942]: E0218 20:25:44.873137 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0\": container with ID starting with f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0 not found: ID does not exist" containerID="f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0"
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.873186
4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0"} err="failed to get container status \"f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0\": rpc error: code = NotFound desc = could not find container \"f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0\": container with ID starting with f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0 not found: ID does not exist" Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.873212 4942 scope.go:117] "RemoveContainer" containerID="bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20" Feb 18 20:25:44 crc kubenswrapper[4942]: E0218 20:25:44.875153 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20\": container with ID starting with bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20 not found: ID does not exist" containerID="bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20" Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.875184 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20"} err="failed to get container status \"bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20\": rpc error: code = NotFound desc = could not find container \"bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20\": container with ID starting with bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20 not found: ID does not exist" Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.875204 4942 scope.go:117] "RemoveContainer" containerID="a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64" Feb 18 20:25:44 crc kubenswrapper[4942]: E0218 
20:25:44.877881 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64\": container with ID starting with a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64 not found: ID does not exist" containerID="a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64" Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.877906 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64"} err="failed to get container status \"a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64\": rpc error: code = NotFound desc = could not find container \"a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64\": container with ID starting with a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64 not found: ID does not exist" Feb 18 20:25:45 crc kubenswrapper[4942]: I0218 20:25:45.050037 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" path="/var/lib/kubelet/pods/b14df05a-46ca-4dba-a05c-8aca88ea9643/volumes" Feb 18 20:25:46 crc kubenswrapper[4942]: I0218 20:25:46.036358 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:25:46 crc kubenswrapper[4942]: E0218 20:25:46.036954 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:25:57 crc kubenswrapper[4942]: I0218 20:25:57.036448 
4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:25:57 crc kubenswrapper[4942]: E0218 20:25:57.037500 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:26:10 crc kubenswrapper[4942]: I0218 20:26:10.035838 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:26:10 crc kubenswrapper[4942]: E0218 20:26:10.036640 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:26:25 crc kubenswrapper[4942]: I0218 20:26:25.037063 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:26:25 crc kubenswrapper[4942]: I0218 20:26:25.313498 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"ea4f0d375bd63d31e9839963358f5e2cdba00a92ebcdf5c8c65e10c1b7192f1c"} Feb 18 20:28:53 crc kubenswrapper[4942]: I0218 20:28:53.740896 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:28:53 crc kubenswrapper[4942]: I0218 20:28:53.741512 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:29:11 crc kubenswrapper[4942]: I0218 20:29:11.079265 4942 patch_prober.go:28] interesting pod/oauth-openshift-666545c866-26rlh container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 20:29:11 crc kubenswrapper[4942]: I0218 20:29:11.079908 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" podUID="78f383f9-664c-43eb-9253-d9df1eaa9716" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 20:29:11 crc kubenswrapper[4942]: I0218 20:29:11.084927 4942 patch_prober.go:28] interesting pod/oauth-openshift-666545c866-26rlh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 20:29:11 crc kubenswrapper[4942]: I0218 20:29:11.085927 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" 
podUID="78f383f9-664c-43eb-9253-d9df1eaa9716" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 20:29:23 crc kubenswrapper[4942]: I0218 20:29:23.741211 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:29:23 crc kubenswrapper[4942]: I0218 20:29:23.741843 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:29:53 crc kubenswrapper[4942]: I0218 20:29:53.741128 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:29:53 crc kubenswrapper[4942]: I0218 20:29:53.742823 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:29:53 crc kubenswrapper[4942]: I0218 20:29:53.742992 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 20:29:53 crc kubenswrapper[4942]: I0218 20:29:53.744030 4942 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ea4f0d375bd63d31e9839963358f5e2cdba00a92ebcdf5c8c65e10c1b7192f1c"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:29:53 crc kubenswrapper[4942]: I0218 20:29:53.744234 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://ea4f0d375bd63d31e9839963358f5e2cdba00a92ebcdf5c8c65e10c1b7192f1c" gracePeriod=600 Feb 18 20:29:54 crc kubenswrapper[4942]: I0218 20:29:54.816314 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="ea4f0d375bd63d31e9839963358f5e2cdba00a92ebcdf5c8c65e10c1b7192f1c" exitCode=0 Feb 18 20:29:54 crc kubenswrapper[4942]: I0218 20:29:54.816374 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"ea4f0d375bd63d31e9839963358f5e2cdba00a92ebcdf5c8c65e10c1b7192f1c"} Feb 18 20:29:54 crc kubenswrapper[4942]: I0218 20:29:54.817167 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"} Feb 18 20:29:54 crc kubenswrapper[4942]: I0218 20:29:54.817209 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.204063 4942 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"] Feb 18 20:30:00 crc kubenswrapper[4942]: E0218 20:30:00.205147 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerName="registry-server" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.205164 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerName="registry-server" Feb 18 20:30:00 crc kubenswrapper[4942]: E0218 20:30:00.205229 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerName="extract-content" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.205238 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerName="extract-content" Feb 18 20:30:00 crc kubenswrapper[4942]: E0218 20:30:00.205258 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerName="extract-utilities" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.205269 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerName="extract-utilities" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.205513 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerName="registry-server" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.206396 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.209475 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.209920 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.236327 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"] Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.347984 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pczv\" (UniqueName: \"kubernetes.io/projected/a10be051-b656-4065-834d-236e091e60e8-kube-api-access-2pczv\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.348039 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10be051-b656-4065-834d-236e091e60e8-config-volume\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.348090 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a10be051-b656-4065-834d-236e091e60e8-secret-volume\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.450678 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pczv\" (UniqueName: \"kubernetes.io/projected/a10be051-b656-4065-834d-236e091e60e8-kube-api-access-2pczv\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.451021 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10be051-b656-4065-834d-236e091e60e8-config-volume\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.451147 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a10be051-b656-4065-834d-236e091e60e8-secret-volume\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.452552 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10be051-b656-4065-834d-236e091e60e8-config-volume\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.458168 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a10be051-b656-4065-834d-236e091e60e8-secret-volume\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.471152 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pczv\" (UniqueName: \"kubernetes.io/projected/a10be051-b656-4065-834d-236e091e60e8-kube-api-access-2pczv\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.533727 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" Feb 18 20:30:01 crc kubenswrapper[4942]: I0218 20:30:01.074737 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"] Feb 18 20:30:01 crc kubenswrapper[4942]: I0218 20:30:01.887268 4942 generic.go:334] "Generic (PLEG): container finished" podID="a10be051-b656-4065-834d-236e091e60e8" containerID="e91ad73bf04f5c5f2f890026f91c9070ffb5b22ca5dc09f77b422fa5636d374e" exitCode=0 Feb 18 20:30:01 crc kubenswrapper[4942]: I0218 20:30:01.887341 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" event={"ID":"a10be051-b656-4065-834d-236e091e60e8","Type":"ContainerDied","Data":"e91ad73bf04f5c5f2f890026f91c9070ffb5b22ca5dc09f77b422fa5636d374e"} Feb 18 20:30:01 crc kubenswrapper[4942]: I0218 20:30:01.888567 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" 
event={"ID":"a10be051-b656-4065-834d-236e091e60e8","Type":"ContainerStarted","Data":"9e464cf346f1ffc3fe57a51da12c4b293901698a8c61c333719b7f47299883ea"} Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.355372 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.513579 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pczv\" (UniqueName: \"kubernetes.io/projected/a10be051-b656-4065-834d-236e091e60e8-kube-api-access-2pczv\") pod \"a10be051-b656-4065-834d-236e091e60e8\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.513843 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10be051-b656-4065-834d-236e091e60e8-config-volume\") pod \"a10be051-b656-4065-834d-236e091e60e8\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.513954 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a10be051-b656-4065-834d-236e091e60e8-secret-volume\") pod \"a10be051-b656-4065-834d-236e091e60e8\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.514302 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a10be051-b656-4065-834d-236e091e60e8-config-volume" (OuterVolumeSpecName: "config-volume") pod "a10be051-b656-4065-834d-236e091e60e8" (UID: "a10be051-b656-4065-834d-236e091e60e8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.514985 4942 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10be051-b656-4065-834d-236e091e60e8-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.524690 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a10be051-b656-4065-834d-236e091e60e8-kube-api-access-2pczv" (OuterVolumeSpecName: "kube-api-access-2pczv") pod "a10be051-b656-4065-834d-236e091e60e8" (UID: "a10be051-b656-4065-834d-236e091e60e8"). InnerVolumeSpecName "kube-api-access-2pczv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.526138 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10be051-b656-4065-834d-236e091e60e8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a10be051-b656-4065-834d-236e091e60e8" (UID: "a10be051-b656-4065-834d-236e091e60e8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.616858 4942 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a10be051-b656-4065-834d-236e091e60e8-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.617113 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pczv\" (UniqueName: \"kubernetes.io/projected/a10be051-b656-4065-834d-236e091e60e8-kube-api-access-2pczv\") on node \"crc\" DevicePath \"\"" Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.911736 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" event={"ID":"a10be051-b656-4065-834d-236e091e60e8","Type":"ContainerDied","Data":"9e464cf346f1ffc3fe57a51da12c4b293901698a8c61c333719b7f47299883ea"} Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.911807 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e464cf346f1ffc3fe57a51da12c4b293901698a8c61c333719b7f47299883ea" Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.911924 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" Feb 18 20:30:04 crc kubenswrapper[4942]: I0218 20:30:04.454791 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9"] Feb 18 20:30:04 crc kubenswrapper[4942]: I0218 20:30:04.470374 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9"] Feb 18 20:30:05 crc kubenswrapper[4942]: I0218 20:30:05.050415 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f02d65f2-f70f-4982-a9d5-fc9d75091181" path="/var/lib/kubelet/pods/f02d65f2-f70f-4982-a9d5-fc9d75091181/volumes" Feb 18 20:30:50 crc kubenswrapper[4942]: I0218 20:30:50.924917 4942 scope.go:117] "RemoveContainer" containerID="bdd33fc87e63584fee347049c15193b1ff470c22181f3d250c7e0de28ba81fd9" Feb 18 20:32:23 crc kubenswrapper[4942]: I0218 20:32:23.741846 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:32:23 crc kubenswrapper[4942]: I0218 20:32:23.742379 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:32:53 crc kubenswrapper[4942]: I0218 20:32:53.740693 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 18 20:32:53 crc kubenswrapper[4942]: I0218 20:32:53.741233 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:33:23 crc kubenswrapper[4942]: I0218 20:33:23.741467 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:33:23 crc kubenswrapper[4942]: I0218 20:33:23.742090 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:33:23 crc kubenswrapper[4942]: I0218 20:33:23.742155 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 20:33:23 crc kubenswrapper[4942]: I0218 20:33:23.743154 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:33:23 crc kubenswrapper[4942]: I0218 20:33:23.743248 4942 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" gracePeriod=600 Feb 18 20:33:23 crc kubenswrapper[4942]: E0218 20:33:23.864808 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:33:24 crc kubenswrapper[4942]: I0218 20:33:24.257618 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" exitCode=0 Feb 18 20:33:24 crc kubenswrapper[4942]: I0218 20:33:24.257682 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"} Feb 18 20:33:24 crc kubenswrapper[4942]: I0218 20:33:24.257731 4942 scope.go:117] "RemoveContainer" containerID="ea4f0d375bd63d31e9839963358f5e2cdba00a92ebcdf5c8c65e10c1b7192f1c" Feb 18 20:33:24 crc kubenswrapper[4942]: I0218 20:33:24.258740 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:33:24 crc kubenswrapper[4942]: E0218 20:33:24.259256 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:33:37 crc kubenswrapper[4942]: I0218 20:33:37.036193 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:33:37 crc kubenswrapper[4942]: E0218 20:33:37.037301 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:33:52 crc kubenswrapper[4942]: I0218 20:33:52.036386 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:33:52 crc kubenswrapper[4942]: E0218 20:33:52.037556 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.036664 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:34:07 crc kubenswrapper[4942]: E0218 20:34:07.037627 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.322923 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bpf4n"] Feb 18 20:34:07 crc kubenswrapper[4942]: E0218 20:34:07.323393 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10be051-b656-4065-834d-236e091e60e8" containerName="collect-profiles" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.323415 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10be051-b656-4065-834d-236e091e60e8" containerName="collect-profiles" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.323675 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10be051-b656-4065-834d-236e091e60e8" containerName="collect-profiles" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.325368 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.339939 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bpf4n"] Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.433041 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-utilities\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.433153 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztqdr\" (UniqueName: \"kubernetes.io/projected/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-kube-api-access-ztqdr\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.433223 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-catalog-content\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.535670 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-utilities\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.535779 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ztqdr\" (UniqueName: \"kubernetes.io/projected/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-kube-api-access-ztqdr\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.535856 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-catalog-content\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.537312 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-utilities\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.537982 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-catalog-content\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.575482 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztqdr\" (UniqueName: \"kubernetes.io/projected/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-kube-api-access-ztqdr\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.741615 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:08 crc kubenswrapper[4942]: I0218 20:34:08.295823 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bpf4n"] Feb 18 20:34:08 crc kubenswrapper[4942]: I0218 20:34:08.803160 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpf4n" event={"ID":"8277de0c-d81c-4d35-a68a-97ca7a1edd6b","Type":"ContainerStarted","Data":"781d76d1512bab16a1778d0f049717a35cc14741855ad3e1da58ce4ed191e1e2"} Feb 18 20:34:09 crc kubenswrapper[4942]: I0218 20:34:09.819933 4942 generic.go:334] "Generic (PLEG): container finished" podID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" containerID="e67ac2c620953275a3382ee0f7606ec0062a3b0d8a79dfa6e97d84b9d29351b8" exitCode=0 Feb 18 20:34:09 crc kubenswrapper[4942]: I0218 20:34:09.820060 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpf4n" event={"ID":"8277de0c-d81c-4d35-a68a-97ca7a1edd6b","Type":"ContainerDied","Data":"e67ac2c620953275a3382ee0f7606ec0062a3b0d8a79dfa6e97d84b9d29351b8"} Feb 18 20:34:09 crc kubenswrapper[4942]: I0218 20:34:09.822807 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:34:10 crc kubenswrapper[4942]: E0218 20:34:10.329430 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:34:10 crc kubenswrapper[4942]: E0218 20:34:10.329595 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztqdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bpf4n_openshift-marketplace(8277de0c-d81c-4d35-a68a-97ca7a1edd6b): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:34:10 crc kubenswrapper[4942]: E0218 20:34:10.330780 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" 
pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:34:10 crc kubenswrapper[4942]: E0218 20:34:10.833611 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:34:20 crc kubenswrapper[4942]: I0218 20:34:20.036247 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:34:20 crc kubenswrapper[4942]: E0218 20:34:20.038520 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:34:24 crc kubenswrapper[4942]: E0218 20:34:24.711234 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:34:24 crc kubenswrapper[4942]: E0218 20:34:24.712196 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztqdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bpf4n_openshift-marketplace(8277de0c-d81c-4d35-a68a-97ca7a1edd6b): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:34:24 crc kubenswrapper[4942]: E0218 20:34:24.713517 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-bpf4n" 
podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.411388 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-scbxm"] Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.414789 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.425694 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-scbxm"] Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.597493 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b509214-59a6-4d42-9b1b-a0252c545c1d-utilities\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.597539 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b509214-59a6-4d42-9b1b-a0252c545c1d-catalog-content\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.597836 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cssts\" (UniqueName: \"kubernetes.io/projected/6b509214-59a6-4d42-9b1b-a0252c545c1d-kube-api-access-cssts\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.699900 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/6b509214-59a6-4d42-9b1b-a0252c545c1d-utilities\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.699960 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b509214-59a6-4d42-9b1b-a0252c545c1d-catalog-content\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.700124 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cssts\" (UniqueName: \"kubernetes.io/projected/6b509214-59a6-4d42-9b1b-a0252c545c1d-kube-api-access-cssts\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.701049 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b509214-59a6-4d42-9b1b-a0252c545c1d-catalog-content\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.703036 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b509214-59a6-4d42-9b1b-a0252c545c1d-utilities\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.723345 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cssts\" (UniqueName: 
\"kubernetes.io/projected/6b509214-59a6-4d42-9b1b-a0252c545c1d-kube-api-access-cssts\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.745311 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:26 crc kubenswrapper[4942]: I0218 20:34:26.231373 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-scbxm"] Feb 18 20:34:26 crc kubenswrapper[4942]: I0218 20:34:26.997250 4942 generic.go:334] "Generic (PLEG): container finished" podID="6b509214-59a6-4d42-9b1b-a0252c545c1d" containerID="93f8d81b8be45787cf6fc4b813deaf1827a837361f9a82316159f70be9ac2fc1" exitCode=0 Feb 18 20:34:27 crc kubenswrapper[4942]: I0218 20:34:26.997316 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scbxm" event={"ID":"6b509214-59a6-4d42-9b1b-a0252c545c1d","Type":"ContainerDied","Data":"93f8d81b8be45787cf6fc4b813deaf1827a837361f9a82316159f70be9ac2fc1"} Feb 18 20:34:27 crc kubenswrapper[4942]: I0218 20:34:27.001157 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scbxm" event={"ID":"6b509214-59a6-4d42-9b1b-a0252c545c1d","Type":"ContainerStarted","Data":"82176acbd80e8d50d3a8dcf73cd791e49339bba80d1a93a7a1da46d8f8dd41f8"} Feb 18 20:34:27 crc kubenswrapper[4942]: E0218 20:34:27.951859 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:34:27 crc kubenswrapper[4942]: E0218 20:34:27.952350 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cssts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-scbxm_openshift-marketplace(6b509214-59a6-4d42-9b1b-a0252c545c1d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:34:27 crc kubenswrapper[4942]: E0218 20:34:27.953580 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" 
with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:34:28 crc kubenswrapper[4942]: E0218 20:34:28.023103 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:34:34 crc kubenswrapper[4942]: I0218 20:34:34.036397 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:34:34 crc kubenswrapper[4942]: E0218 20:34:34.037589 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:34:38 crc kubenswrapper[4942]: E0218 20:34:38.039213 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:34:42 crc kubenswrapper[4942]: E0218 20:34:42.514977 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 
Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:34:42 crc kubenswrapper[4942]: E0218 20:34:42.515896 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cssts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-scbxm_openshift-marketplace(6b509214-59a6-4d42-9b1b-a0252c545c1d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 
Internal Server Error" logger="UnhandledError" Feb 18 20:34:42 crc kubenswrapper[4942]: E0218 20:34:42.517225 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:34:49 crc kubenswrapper[4942]: I0218 20:34:49.036279 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:34:49 crc kubenswrapper[4942]: E0218 20:34:49.037130 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:34:49 crc kubenswrapper[4942]: E0218 20:34:49.814678 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:34:49 crc kubenswrapper[4942]: E0218 20:34:49.815119 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztqdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bpf4n_openshift-marketplace(8277de0c-d81c-4d35-a68a-97ca7a1edd6b): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:34:49 crc kubenswrapper[4942]: E0218 20:34:49.816365 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-bpf4n" 
podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:34:57 crc kubenswrapper[4942]: E0218 20:34:57.038876 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:35:02 crc kubenswrapper[4942]: E0218 20:35:02.039536 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:35:04 crc kubenswrapper[4942]: I0218 20:35:04.037110 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:35:04 crc kubenswrapper[4942]: E0218 20:35:04.038041 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:35:11 crc kubenswrapper[4942]: E0218 20:35:11.097147 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:35:11 crc kubenswrapper[4942]: E0218 20:35:11.097902 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cssts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-scbxm_openshift-marketplace(6b509214-59a6-4d42-9b1b-a0252c545c1d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:35:11 crc kubenswrapper[4942]: E0218 20:35:11.099119 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" 
with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:35:13 crc kubenswrapper[4942]: E0218 20:35:13.039712 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:35:16 crc kubenswrapper[4942]: I0218 20:35:16.036429 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:35:16 crc kubenswrapper[4942]: E0218 20:35:16.037320 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:35:23 crc kubenswrapper[4942]: E0218 20:35:23.038201 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:35:26 crc kubenswrapper[4942]: E0218 20:35:26.039274 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:35:27 crc kubenswrapper[4942]: I0218 20:35:27.036569 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:35:27 crc kubenswrapper[4942]: E0218 20:35:27.037231 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:35:36 crc kubenswrapper[4942]: E0218 20:35:36.041995 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.515962 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8lc5d"] Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.519991 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.535133 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lc5d"] Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.681449 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqcsp\" (UniqueName: \"kubernetes.io/projected/618efece-b48e-4e8d-baef-15eb25017938-kube-api-access-nqcsp\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.681517 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/618efece-b48e-4e8d-baef-15eb25017938-catalog-content\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.681693 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/618efece-b48e-4e8d-baef-15eb25017938-utilities\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.783503 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqcsp\" (UniqueName: \"kubernetes.io/projected/618efece-b48e-4e8d-baef-15eb25017938-kube-api-access-nqcsp\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.783559 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/618efece-b48e-4e8d-baef-15eb25017938-catalog-content\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.783631 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/618efece-b48e-4e8d-baef-15eb25017938-utilities\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.784230 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/618efece-b48e-4e8d-baef-15eb25017938-utilities\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.784482 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/618efece-b48e-4e8d-baef-15eb25017938-catalog-content\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.816677 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqcsp\" (UniqueName: \"kubernetes.io/projected/618efece-b48e-4e8d-baef-15eb25017938-kube-api-access-nqcsp\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.854247 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:39 crc kubenswrapper[4942]: I0218 20:35:39.414754 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lc5d"] Feb 18 20:35:39 crc kubenswrapper[4942]: I0218 20:35:39.834716 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lc5d" event={"ID":"618efece-b48e-4e8d-baef-15eb25017938","Type":"ContainerStarted","Data":"c4e9d76c330550eb4d99211ab40437e6c91931f8c18e6fd30c8ca251b4f351a0"} Feb 18 20:35:40 crc kubenswrapper[4942]: I0218 20:35:40.850101 4942 generic.go:334] "Generic (PLEG): container finished" podID="618efece-b48e-4e8d-baef-15eb25017938" containerID="28e845546a64e1540e9cd180df23419bbad8ebf7eb82a48eaf7341a83acac702" exitCode=0 Feb 18 20:35:40 crc kubenswrapper[4942]: I0218 20:35:40.850185 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lc5d" event={"ID":"618efece-b48e-4e8d-baef-15eb25017938","Type":"ContainerDied","Data":"28e845546a64e1540e9cd180df23419bbad8ebf7eb82a48eaf7341a83acac702"} Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.061526 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:35:41 crc kubenswrapper[4942]: E0218 20:35:41.062283 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.496812 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fssdt"] Feb 
18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.498946 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.535225 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fssdt"] Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.656943 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-utilities\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.657039 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-catalog-content\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.657573 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8skx\" (UniqueName: \"kubernetes.io/projected/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-kube-api-access-x8skx\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc kubenswrapper[4942]: E0218 20:35:41.666908 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 20:35:41 crc 
kubenswrapper[4942]: E0218 20:35:41.667082 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqcsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8lc5d_openshift-marketplace(618efece-b48e-4e8d-baef-15eb25017938): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:35:41 crc kubenswrapper[4942]: E0218 
20:35:41.668272 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.759055 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8skx\" (UniqueName: \"kubernetes.io/projected/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-kube-api-access-x8skx\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.759154 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-utilities\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.759214 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-catalog-content\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.759878 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-catalog-content\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc 
kubenswrapper[4942]: I0218 20:35:41.759892 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-utilities\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc kubenswrapper[4942]: E0218 20:35:41.861077 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:35:42 crc kubenswrapper[4942]: I0218 20:35:42.468986 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8skx\" (UniqueName: \"kubernetes.io/projected/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-kube-api-access-x8skx\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:42 crc kubenswrapper[4942]: I0218 20:35:42.725631 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:43 crc kubenswrapper[4942]: I0218 20:35:43.118425 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fssdt"] Feb 18 20:35:43 crc kubenswrapper[4942]: W0218 20:35:43.123883 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9fb128b_71df_4bd4_8e7c_6494714c5a0c.slice/crio-2e20291057bbeb9e1e0db649b4b0aca356b84913d0b8de85162b4ab533dc0484 WatchSource:0}: Error finding container 2e20291057bbeb9e1e0db649b4b0aca356b84913d0b8de85162b4ab533dc0484: Status 404 returned error can't find the container with id 2e20291057bbeb9e1e0db649b4b0aca356b84913d0b8de85162b4ab533dc0484 Feb 18 20:35:43 crc kubenswrapper[4942]: I0218 20:35:43.893456 4942 generic.go:334] "Generic (PLEG): container finished" podID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" containerID="799084e5d6096a3d77015453eca62d3d9ef509516f5bf2db78bfc07d368af996" exitCode=0 Feb 18 20:35:43 crc kubenswrapper[4942]: I0218 20:35:43.893755 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fssdt" event={"ID":"a9fb128b-71df-4bd4-8e7c-6494714c5a0c","Type":"ContainerDied","Data":"799084e5d6096a3d77015453eca62d3d9ef509516f5bf2db78bfc07d368af996"} Feb 18 20:35:43 crc kubenswrapper[4942]: I0218 20:35:43.893811 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fssdt" event={"ID":"a9fb128b-71df-4bd4-8e7c-6494714c5a0c","Type":"ContainerStarted","Data":"2e20291057bbeb9e1e0db649b4b0aca356b84913d0b8de85162b4ab533dc0484"} Feb 18 20:35:45 crc kubenswrapper[4942]: E0218 20:35:45.187234 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" 
image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:35:45 crc kubenswrapper[4942]: E0218 20:35:45.187829 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fssdt_openshift-marketplace(a9fb128b-71df-4bd4-8e7c-6494714c5a0c): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" 
logger="UnhandledError" Feb 18 20:35:45 crc kubenswrapper[4942]: E0218 20:35:45.189528 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:35:45 crc kubenswrapper[4942]: E0218 20:35:45.804114 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:35:45 crc kubenswrapper[4942]: E0218 20:35:45.804620 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztqdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bpf4n_openshift-marketplace(8277de0c-d81c-4d35-a68a-97ca7a1edd6b): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:35:45 crc kubenswrapper[4942]: E0218 20:35:45.806340 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-bpf4n" 
podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:35:45 crc kubenswrapper[4942]: E0218 20:35:45.922947 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:35:48 crc kubenswrapper[4942]: E0218 20:35:48.038102 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:35:52 crc kubenswrapper[4942]: I0218 20:35:52.036934 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:35:52 crc kubenswrapper[4942]: E0218 20:35:52.037702 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:35:58 crc kubenswrapper[4942]: E0218 20:35:58.417483 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 20:35:58 crc kubenswrapper[4942]: E0218 20:35:58.418622 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqcsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8lc5d_openshift-marketplace(618efece-b48e-4e8d-baef-15eb25017938): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:35:58 crc kubenswrapper[4942]: E0218 20:35:58.420311 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:35:59 crc kubenswrapper[4942]: E0218 20:35:59.038614 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:36:00 crc kubenswrapper[4942]: E0218 20:36:00.025085 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:36:00 crc kubenswrapper[4942]: E0218 20:36:00.025262 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fssdt_openshift-marketplace(a9fb128b-71df-4bd4-8e7c-6494714c5a0c): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:00 crc kubenswrapper[4942]: E0218 20:36:00.026513 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-fssdt" 
podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:36:02 crc kubenswrapper[4942]: E0218 20:36:02.706671 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:36:02 crc kubenswrapper[4942]: E0218 20:36:02.707084 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cssts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice
{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-scbxm_openshift-marketplace(6b509214-59a6-4d42-9b1b-a0252c545c1d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:02 crc kubenswrapper[4942]: E0218 20:36:02.708378 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:36:07 crc kubenswrapper[4942]: I0218 20:36:07.036297 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:36:07 crc kubenswrapper[4942]: E0218 20:36:07.037141 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:36:09 crc kubenswrapper[4942]: E0218 20:36:09.039126 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:36:12 crc kubenswrapper[4942]: E0218 20:36:12.039216 4942 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:36:12 crc kubenswrapper[4942]: E0218 20:36:12.039633 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:36:17 crc kubenswrapper[4942]: E0218 20:36:17.037856 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:36:19 crc kubenswrapper[4942]: I0218 20:36:19.035746 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:36:19 crc kubenswrapper[4942]: E0218 20:36:19.036489 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:36:23 crc kubenswrapper[4942]: E0218 20:36:23.314663 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 20:36:23 crc kubenswrapper[4942]: E0218 20:36:23.315413 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqcsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8lc5d_openshift-marketplace(618efece-b48e-4e8d-baef-15eb25017938): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:23 crc kubenswrapper[4942]: E0218 20:36:23.316700 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:36:23 crc kubenswrapper[4942]: E0218 20:36:23.784071 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:36:23 crc kubenswrapper[4942]: E0218 20:36:23.784539 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fssdt_openshift-marketplace(a9fb128b-71df-4bd4-8e7c-6494714c5a0c): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:23 crc kubenswrapper[4942]: E0218 20:36:23.785707 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-fssdt" 
podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:36:25 crc kubenswrapper[4942]: I0218 20:36:25.399164 4942 generic.go:334] "Generic (PLEG): container finished" podID="498a3ae0-adb2-4729-a2eb-78e267e1613b" containerID="169b9c7b6b3a31907bbb5568c6300b81731785a07744ed74ff40a7d3cf050f29" exitCode=1 Feb 18 20:36:25 crc kubenswrapper[4942]: I0218 20:36:25.399288 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"498a3ae0-adb2-4729-a2eb-78e267e1613b","Type":"ContainerDied","Data":"169b9c7b6b3a31907bbb5568c6300b81731785a07744ed74ff40a7d3cf050f29"} Feb 18 20:36:26 crc kubenswrapper[4942]: E0218 20:36:26.040598 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:36:26 crc kubenswrapper[4942]: I0218 20:36:26.961315 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007013 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-config-data\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007124 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-temporary\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007178 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-workdir\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007204 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007231 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config-secret\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007264 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007349 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ssh-key\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007410 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ca-certs\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007442 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnpsv\" (UniqueName: \"kubernetes.io/projected/498a3ae0-adb2-4729-a2eb-78e267e1613b-kube-api-access-nnpsv\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007910 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-config-data" (OuterVolumeSpecName: "config-data") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.008192 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.008812 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.013968 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.022011 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498a3ae0-adb2-4729-a2eb-78e267e1613b-kube-api-access-nnpsv" (OuterVolumeSpecName: "kube-api-access-nnpsv") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "kube-api-access-nnpsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.038947 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.061389 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.075919 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.089523 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.102484 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.110442 4942 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.110508 4942 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.110534 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.110548 4942 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.110563 4942 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc 
kubenswrapper[4942]: I0218 20:36:27.110576 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.110587 4942 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.110601 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnpsv\" (UniqueName: \"kubernetes.io/projected/498a3ae0-adb2-4729-a2eb-78e267e1613b-kube-api-access-nnpsv\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.136542 4942 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.212921 4942 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.421025 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"498a3ae0-adb2-4729-a2eb-78e267e1613b","Type":"ContainerDied","Data":"4638cc0d3971f910691e7e7ad60b86d01493160078b4f86a07d3570748f42e2f"} Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.421084 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4638cc0d3971f910691e7e7ad60b86d01493160078b4f86a07d3570748f42e2f" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.421106 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:36:28 crc kubenswrapper[4942]: I0218 20:36:28.961102 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 20:36:28 crc kubenswrapper[4942]: E0218 20:36:28.962035 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498a3ae0-adb2-4729-a2eb-78e267e1613b" containerName="tempest-tests-tempest-tests-runner" Feb 18 20:36:28 crc kubenswrapper[4942]: I0218 20:36:28.962059 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="498a3ae0-adb2-4729-a2eb-78e267e1613b" containerName="tempest-tests-tempest-tests-runner" Feb 18 20:36:28 crc kubenswrapper[4942]: I0218 20:36:28.962406 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="498a3ae0-adb2-4729-a2eb-78e267e1613b" containerName="tempest-tests-tempest-tests-runner" Feb 18 20:36:28 crc kubenswrapper[4942]: I0218 20:36:28.963484 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:36:28 crc kubenswrapper[4942]: I0218 20:36:28.966326 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fgwq4" Feb 18 20:36:28 crc kubenswrapper[4942]: I0218 20:36:28.975934 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 20:36:29 crc kubenswrapper[4942]: E0218 20:36:29.038106 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:36:29 crc kubenswrapper[4942]: I0218 20:36:29.052606 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fa910027-8bd8-4779-9dc5-9071534fa252\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:36:29 crc kubenswrapper[4942]: I0218 20:36:29.052706 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp2tv\" (UniqueName: \"kubernetes.io/projected/fa910027-8bd8-4779-9dc5-9071534fa252-kube-api-access-lp2tv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fa910027-8bd8-4779-9dc5-9071534fa252\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:36:29 crc kubenswrapper[4942]: I0218 20:36:29.154464 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp2tv\" (UniqueName: 
\"kubernetes.io/projected/fa910027-8bd8-4779-9dc5-9071534fa252-kube-api-access-lp2tv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fa910027-8bd8-4779-9dc5-9071534fa252\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:36:29 crc kubenswrapper[4942]: I0218 20:36:29.154747 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fa910027-8bd8-4779-9dc5-9071534fa252\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:36:29 crc kubenswrapper[4942]: I0218 20:36:29.155072 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fa910027-8bd8-4779-9dc5-9071534fa252\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:36:29 crc kubenswrapper[4942]: I0218 20:36:29.177106 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp2tv\" (UniqueName: \"kubernetes.io/projected/fa910027-8bd8-4779-9dc5-9071534fa252-kube-api-access-lp2tv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fa910027-8bd8-4779-9dc5-9071534fa252\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:36:29 crc kubenswrapper[4942]: I0218 20:36:29.178573 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fa910027-8bd8-4779-9dc5-9071534fa252\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:36:29 
crc kubenswrapper[4942]: I0218 20:36:29.303646 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:36:29 crc kubenswrapper[4942]: I0218 20:36:29.794746 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 20:36:30 crc kubenswrapper[4942]: I0218 20:36:30.458971 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fa910027-8bd8-4779-9dc5-9071534fa252","Type":"ContainerStarted","Data":"4fa10290a286dccbab8e982b7fe69b9138d97c789c261ecc98b3a52a30c71931"} Feb 18 20:36:30 crc kubenswrapper[4942]: E0218 20:36:30.789505 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/quay/busybox:latest" Feb 18 20:36:30 crc kubenswrapper[4942]: E0218 20:36:30.789946 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp2tv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(fa910027-8bd8-4779-9dc5-9071534fa252): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:30 crc kubenswrapper[4942]: E0218 20:36:30.791130 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:36:31 crc kubenswrapper[4942]: E0218 20:36:31.473642 
4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:36:32 crc kubenswrapper[4942]: I0218 20:36:32.036549 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:36:32 crc kubenswrapper[4942]: E0218 20:36:32.036970 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:36:37 crc kubenswrapper[4942]: E0218 20:36:37.039168 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:36:38 crc kubenswrapper[4942]: E0218 20:36:38.038693 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:36:40 crc kubenswrapper[4942]: E0218 20:36:40.038664 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling 
image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:36:41 crc kubenswrapper[4942]: E0218 20:36:41.069002 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:36:43 crc kubenswrapper[4942]: I0218 20:36:43.036129 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:36:43 crc kubenswrapper[4942]: E0218 20:36:43.037352 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:36:44 crc kubenswrapper[4942]: E0218 20:36:44.481846 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/quay/busybox:latest" Feb 18 20:36:44 crc kubenswrapper[4942]: E0218 20:36:44.482078 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp2tv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(fa910027-8bd8-4779-9dc5-9071534fa252): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:44 crc kubenswrapper[4942]: E0218 20:36:44.484426 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:36:49 crc kubenswrapper[4942]: E0218 20:36:49.039359 
4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:36:51 crc kubenswrapper[4942]: E0218 20:36:51.055753 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:36:52 crc kubenswrapper[4942]: E0218 20:36:52.038226 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:36:54 crc kubenswrapper[4942]: I0218 20:36:54.036183 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:36:54 crc kubenswrapper[4942]: E0218 20:36:54.036597 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:36:54 crc kubenswrapper[4942]: E0218 20:36:54.038277 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:36:58 crc kubenswrapper[4942]: E0218 20:36:58.038961 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:37:02 crc kubenswrapper[4942]: E0218 20:37:02.039691 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:37:02 crc kubenswrapper[4942]: E0218 20:37:02.039732 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:37:06 crc kubenswrapper[4942]: E0218 20:37:06.022545 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:37:06 crc kubenswrapper[4942]: E0218 20:37:06.023096 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fssdt_openshift-marketplace(a9fb128b-71df-4bd4-8e7c-6494714c5a0c): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:37:06 crc kubenswrapper[4942]: E0218 20:37:06.024404 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" 
with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:37:07 crc kubenswrapper[4942]: I0218 20:37:07.037753 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:37:07 crc kubenswrapper[4942]: E0218 20:37:07.038358 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:37:11 crc kubenswrapper[4942]: E0218 20:37:11.282884 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:37:11 crc kubenswrapper[4942]: E0218 20:37:11.283540 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztqdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bpf4n_openshift-marketplace(8277de0c-d81c-4d35-a68a-97ca7a1edd6b): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:37:11 crc kubenswrapper[4942]: E0218 20:37:11.284754 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-bpf4n" 
podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:37:13 crc kubenswrapper[4942]: E0218 20:37:13.833893 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/quay/busybox:latest" Feb 18 20:37:13 crc kubenswrapper[4942]: E0218 20:37:13.834336 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp2tv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(fa910027-8bd8-4779-9dc5-9071534fa252): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:37:13 crc 
kubenswrapper[4942]: E0218 20:37:13.835510 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:37:14 crc kubenswrapper[4942]: E0218 20:37:14.037507 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:37:17 crc kubenswrapper[4942]: E0218 20:37:17.194455 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 20:37:17 crc kubenswrapper[4942]: E0218 20:37:17.194641 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqcsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8lc5d_openshift-marketplace(618efece-b48e-4e8d-baef-15eb25017938): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:37:17 crc kubenswrapper[4942]: E0218 20:37:17.195971 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-8lc5d" 
podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:37:18 crc kubenswrapper[4942]: E0218 20:37:18.038389 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:37:20 crc kubenswrapper[4942]: I0218 20:37:20.036584 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:37:20 crc kubenswrapper[4942]: E0218 20:37:20.037165 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:37:26 crc kubenswrapper[4942]: E0218 20:37:26.039951 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:37:27 crc kubenswrapper[4942]: E0218 20:37:27.038100 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:37:28 crc kubenswrapper[4942]: E0218 20:37:28.616370 4942 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:37:28 crc kubenswrapper[4942]: E0218 20:37:28.616754 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cssts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-scbxm_openshift-marketplace(6b509214-59a6-4d42-9b1b-a0252c545c1d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:37:28 crc kubenswrapper[4942]: E0218 20:37:28.618002 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:37:29 crc kubenswrapper[4942]: E0218 20:37:29.037294 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:37:30 crc kubenswrapper[4942]: E0218 20:37:30.038411 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:37:32 crc kubenswrapper[4942]: I0218 20:37:32.036682 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:37:32 crc kubenswrapper[4942]: E0218 20:37:32.038348 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:37:37 crc kubenswrapper[4942]: E0218 20:37:37.038717 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:37:40 crc kubenswrapper[4942]: E0218 20:37:40.038506 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:37:40 crc kubenswrapper[4942]: E0218 20:37:40.040919 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:37:41 crc kubenswrapper[4942]: E0218 20:37:41.053393 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:37:43 crc kubenswrapper[4942]: E0218 20:37:43.040053 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling 
image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:37:44 crc kubenswrapper[4942]: I0218 20:37:44.035829 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:37:44 crc kubenswrapper[4942]: E0218 20:37:44.036499 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:37:51 crc kubenswrapper[4942]: E0218 20:37:51.069473 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:37:52 crc kubenswrapper[4942]: E0218 20:37:52.039517 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:37:52 crc kubenswrapper[4942]: E0218 20:37:52.039540 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" 
podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:37:52 crc kubenswrapper[4942]: E0218 20:37:52.040611 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:37:55 crc kubenswrapper[4942]: E0218 20:37:55.040213 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:37:58 crc kubenswrapper[4942]: I0218 20:37:58.035976 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:37:58 crc kubenswrapper[4942]: E0218 20:37:58.037043 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:38:03 crc kubenswrapper[4942]: E0218 20:38:03.043140 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:38:03 crc kubenswrapper[4942]: E0218 20:38:03.045286 4942 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:38:05 crc kubenswrapper[4942]: E0218 20:38:05.038633 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:38:08 crc kubenswrapper[4942]: E0218 20:38:08.039842 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:38:08 crc kubenswrapper[4942]: E0218 20:38:08.331171 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/quay/busybox:latest" Feb 18 20:38:08 crc kubenswrapper[4942]: E0218 20:38:08.331405 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp2tv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(fa910027-8bd8-4779-9dc5-9071534fa252): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:38:08 crc kubenswrapper[4942]: E0218 20:38:08.332693 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:38:10 crc kubenswrapper[4942]: I0218 20:38:10.036272 
4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:38:10 crc kubenswrapper[4942]: E0218 20:38:10.036980 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:38:16 crc kubenswrapper[4942]: E0218 20:38:16.041823 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:38:16 crc kubenswrapper[4942]: E0218 20:38:16.041867 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:38:18 crc kubenswrapper[4942]: E0218 20:38:18.038904 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:38:20 crc kubenswrapper[4942]: E0218 20:38:20.038593 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:38:20 crc kubenswrapper[4942]: E0218 20:38:20.038977 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:38:25 crc kubenswrapper[4942]: I0218 20:38:25.036676 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:38:25 crc kubenswrapper[4942]: I0218 20:38:25.765193 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"9bddad80fb30b139a11793b4f0ba955a9abbd5925ded5b3df80be5dbd41f27ba"} Feb 18 20:38:30 crc kubenswrapper[4942]: E0218 20:38:30.039666 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:38:31 crc kubenswrapper[4942]: E0218 20:38:31.038149 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:38:31 crc kubenswrapper[4942]: E0218 20:38:31.052932 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:38:33 crc kubenswrapper[4942]: E0218 20:38:33.038788 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:38:35 crc kubenswrapper[4942]: E0218 20:38:35.686688 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:38:35 crc kubenswrapper[4942]: E0218 20:38:35.687453 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fssdt_openshift-marketplace(a9fb128b-71df-4bd4-8e7c-6494714c5a0c): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:38:35 crc kubenswrapper[4942]: E0218 20:38:35.688825 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-fssdt" 
podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:38:41 crc kubenswrapper[4942]: E0218 20:38:41.054528 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:38:44 crc kubenswrapper[4942]: E0218 20:38:44.038454 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:38:45 crc kubenswrapper[4942]: E0218 20:38:45.037907 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:38:47 crc kubenswrapper[4942]: E0218 20:38:47.967512 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 20:38:47 crc kubenswrapper[4942]: E0218 20:38:47.968020 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqcsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8lc5d_openshift-marketplace(618efece-b48e-4e8d-baef-15eb25017938): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:38:47 crc kubenswrapper[4942]: E0218 20:38:47.969256 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-8lc5d" 
podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:38:50 crc kubenswrapper[4942]: E0218 20:38:50.043365 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.050617 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vfphw/must-gather-kzsbl"] Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.052430 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.060118 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vfphw"/"kube-root-ca.crt" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.060121 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vfphw"/"openshift-service-ca.crt" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.095736 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vfphw/must-gather-kzsbl"] Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.133796 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x25bk\" (UniqueName: \"kubernetes.io/projected/96ac2dba-808d-4839-9f27-9c77b6d1f97d-kube-api-access-x25bk\") pod \"must-gather-kzsbl\" (UID: \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\") " pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.133970 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/96ac2dba-808d-4839-9f27-9c77b6d1f97d-must-gather-output\") pod \"must-gather-kzsbl\" (UID: \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\") " pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.236143 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/96ac2dba-808d-4839-9f27-9c77b6d1f97d-must-gather-output\") pod \"must-gather-kzsbl\" (UID: \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\") " pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.236342 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x25bk\" (UniqueName: \"kubernetes.io/projected/96ac2dba-808d-4839-9f27-9c77b6d1f97d-kube-api-access-x25bk\") pod \"must-gather-kzsbl\" (UID: \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\") " pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.236721 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/96ac2dba-808d-4839-9f27-9c77b6d1f97d-must-gather-output\") pod \"must-gather-kzsbl\" (UID: \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\") " pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.255714 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x25bk\" (UniqueName: \"kubernetes.io/projected/96ac2dba-808d-4839-9f27-9c77b6d1f97d-kube-api-access-x25bk\") pod \"must-gather-kzsbl\" (UID: \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\") " pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.379055 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.870370 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vfphw/must-gather-kzsbl"] Feb 18 20:38:51 crc kubenswrapper[4942]: I0218 20:38:51.082503 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vfphw/must-gather-kzsbl" event={"ID":"96ac2dba-808d-4839-9f27-9c77b6d1f97d","Type":"ContainerStarted","Data":"35d1f533ad495def710ab683242fdb7a66e65bd3343a40e16806102801f7451c"} Feb 18 20:38:51 crc kubenswrapper[4942]: E0218 20:38:51.521672 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest" Feb 18 20:38:51 crc kubenswrapper[4942]: E0218 20:38:51.521896 4942 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 18 20:38:51 crc kubenswrapper[4942]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c if command -v setsid >/dev/null 2>&1 && command -v ps >/dev/null 2>&1 && command -v pkill >/dev/null 2>&1; then Feb 18 20:38:51 crc kubenswrapper[4942]: HAVE_SESSION_TOOLS=true Feb 18 20:38:51 crc kubenswrapper[4942]: else Feb 18 20:38:51 crc kubenswrapper[4942]: HAVE_SESSION_TOOLS=false Feb 18 20:38:51 crc kubenswrapper[4942]: fi Feb 18 20:38:51 crc kubenswrapper[4942]: Feb 18 20:38:51 crc kubenswrapper[4942]: Feb 18 20:38:51 crc kubenswrapper[4942]: echo "[disk usage checker] Started" Feb 18 20:38:51 crc kubenswrapper[4942]: target_dir="/must-gather" Feb 18 20:38:51 crc kubenswrapper[4942]: usage_percentage_limit="80" Feb 18 20:38:51 crc kubenswrapper[4942]: while true; do Feb 18 20:38:51 crc kubenswrapper[4942]: usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//') Feb 18 
20:38:51 crc kubenswrapper[4942]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}" Feb 18 20:38:51 crc kubenswrapper[4942]: if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then Feb 18 20:38:51 crc kubenswrapper[4942]: echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..." Feb 18 20:38:51 crc kubenswrapper[4942]: if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 20:38:51 crc kubenswrapper[4942]: ps -o sess --no-headers | sort -u | while read sid; do Feb 18 20:38:51 crc kubenswrapper[4942]: [[ "$sid" -eq "${$}" ]] && continue Feb 18 20:38:51 crc kubenswrapper[4942]: pkill --signal SIGKILL --session "$sid" Feb 18 20:38:51 crc kubenswrapper[4942]: done Feb 18 20:38:51 crc kubenswrapper[4942]: else Feb 18 20:38:51 crc kubenswrapper[4942]: kill 0 Feb 18 20:38:51 crc kubenswrapper[4942]: fi Feb 18 20:38:51 crc kubenswrapper[4942]: exit 1 Feb 18 20:38:51 crc kubenswrapper[4942]: fi Feb 18 20:38:51 crc kubenswrapper[4942]: sleep 5 Feb 18 20:38:51 crc kubenswrapper[4942]: done & if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 20:38:51 crc kubenswrapper[4942]: setsid -w bash <<-MUSTGATHER_EOF Feb 18 20:38:51 crc kubenswrapper[4942]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 20:38:51 crc kubenswrapper[4942]: MUSTGATHER_EOF Feb 18 20:38:51 crc kubenswrapper[4942]: else Feb 18 20:38:51 crc kubenswrapper[4942]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 20:38:51 crc kubenswrapper[4942]: fi; sync && echo 'Caches written to 
disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x25bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-kzsbl_openshift-must-gather-vfphw(96ac2dba-808d-4839-9f27-9c77b6d1f97d): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error Feb 18 20:38:51 crc kubenswrapper[4942]: > logger="UnhandledError" Feb 18 20:38:51 crc kubenswrapper[4942]: E0218 20:38:51.524113 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" 
pod="openshift-must-gather-vfphw/must-gather-kzsbl" podUID="96ac2dba-808d-4839-9f27-9c77b6d1f97d" Feb 18 20:38:52 crc kubenswrapper[4942]: E0218 20:38:52.096264 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-vfphw/must-gather-kzsbl" podUID="96ac2dba-808d-4839-9f27-9c77b6d1f97d" Feb 18 20:38:54 crc kubenswrapper[4942]: E0218 20:38:54.039158 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:38:56 crc kubenswrapper[4942]: E0218 20:38:56.036999 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:38:58 crc kubenswrapper[4942]: E0218 20:38:58.038452 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:39:00 crc kubenswrapper[4942]: E0218 20:39:00.039508 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:39:02 crc kubenswrapper[4942]: E0218 20:39:02.039570 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:39:03 crc kubenswrapper[4942]: E0218 20:39:03.512189 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest" Feb 18 20:39:03 crc kubenswrapper[4942]: E0218 20:39:03.512343 4942 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 18 20:39:03 crc kubenswrapper[4942]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c if command -v setsid >/dev/null 2>&1 && command -v ps >/dev/null 2>&1 && command -v pkill >/dev/null 2>&1; then Feb 18 20:39:03 crc kubenswrapper[4942]: HAVE_SESSION_TOOLS=true Feb 18 20:39:03 crc kubenswrapper[4942]: else Feb 18 20:39:03 crc kubenswrapper[4942]: HAVE_SESSION_TOOLS=false Feb 18 20:39:03 crc kubenswrapper[4942]: fi Feb 18 20:39:03 crc kubenswrapper[4942]: Feb 18 20:39:03 crc kubenswrapper[4942]: Feb 18 20:39:03 crc kubenswrapper[4942]: echo "[disk usage checker] Started" Feb 18 20:39:03 crc kubenswrapper[4942]: target_dir="/must-gather" Feb 18 20:39:03 crc kubenswrapper[4942]: usage_percentage_limit="80" Feb 18 20:39:03 crc kubenswrapper[4942]: while true; do Feb 18 20:39:03 crc kubenswrapper[4942]: usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print 
$5}' | sed 's/%//') Feb 18 20:39:03 crc kubenswrapper[4942]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}" Feb 18 20:39:03 crc kubenswrapper[4942]: if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then Feb 18 20:39:03 crc kubenswrapper[4942]: echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..." Feb 18 20:39:03 crc kubenswrapper[4942]: if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 20:39:03 crc kubenswrapper[4942]: ps -o sess --no-headers | sort -u | while read sid; do Feb 18 20:39:03 crc kubenswrapper[4942]: [[ "$sid" -eq "${$}" ]] && continue Feb 18 20:39:03 crc kubenswrapper[4942]: pkill --signal SIGKILL --session "$sid" Feb 18 20:39:03 crc kubenswrapper[4942]: done Feb 18 20:39:03 crc kubenswrapper[4942]: else Feb 18 20:39:03 crc kubenswrapper[4942]: kill 0 Feb 18 20:39:03 crc kubenswrapper[4942]: fi Feb 18 20:39:03 crc kubenswrapper[4942]: exit 1 Feb 18 20:39:03 crc kubenswrapper[4942]: fi Feb 18 20:39:03 crc kubenswrapper[4942]: sleep 5 Feb 18 20:39:03 crc kubenswrapper[4942]: done & if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 20:39:03 crc kubenswrapper[4942]: setsid -w bash <<-MUSTGATHER_EOF Feb 18 20:39:03 crc kubenswrapper[4942]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 20:39:03 crc kubenswrapper[4942]: MUSTGATHER_EOF Feb 18 20:39:03 crc kubenswrapper[4942]: else Feb 18 20:39:03 crc kubenswrapper[4942]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 20:39:03 crc kubenswrapper[4942]: fi; sync && echo 'Caches written to 
disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x25bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-kzsbl_openshift-must-gather-vfphw(96ac2dba-808d-4839-9f27-9c77b6d1f97d): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error Feb 18 20:39:03 crc kubenswrapper[4942]: > logger="UnhandledError" Feb 18 20:39:03 crc kubenswrapper[4942]: E0218 20:39:03.514499 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" 
pod="openshift-must-gather-vfphw/must-gather-kzsbl" podUID="96ac2dba-808d-4839-9f27-9c77b6d1f97d" Feb 18 20:39:06 crc kubenswrapper[4942]: I0218 20:39:06.327589 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vfphw/must-gather-kzsbl"] Feb 18 20:39:06 crc kubenswrapper[4942]: I0218 20:39:06.339049 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vfphw/must-gather-kzsbl"] Feb 18 20:39:07 crc kubenswrapper[4942]: E0218 20:39:07.038294 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:39:07 crc kubenswrapper[4942]: I0218 20:39:07.100295 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:39:07 crc kubenswrapper[4942]: I0218 20:39:07.150320 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x25bk\" (UniqueName: \"kubernetes.io/projected/96ac2dba-808d-4839-9f27-9c77b6d1f97d-kube-api-access-x25bk\") pod \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\" (UID: \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\") " Feb 18 20:39:07 crc kubenswrapper[4942]: I0218 20:39:07.150403 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/96ac2dba-808d-4839-9f27-9c77b6d1f97d-must-gather-output\") pod \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\" (UID: \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\") " Feb 18 20:39:07 crc kubenswrapper[4942]: I0218 20:39:07.151893 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96ac2dba-808d-4839-9f27-9c77b6d1f97d-must-gather-output" (OuterVolumeSpecName: 
"must-gather-output") pod "96ac2dba-808d-4839-9f27-9c77b6d1f97d" (UID: "96ac2dba-808d-4839-9f27-9c77b6d1f97d"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:39:07 crc kubenswrapper[4942]: I0218 20:39:07.188700 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96ac2dba-808d-4839-9f27-9c77b6d1f97d-kube-api-access-x25bk" (OuterVolumeSpecName: "kube-api-access-x25bk") pod "96ac2dba-808d-4839-9f27-9c77b6d1f97d" (UID: "96ac2dba-808d-4839-9f27-9c77b6d1f97d"). InnerVolumeSpecName "kube-api-access-x25bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:39:07 crc kubenswrapper[4942]: I0218 20:39:07.253234 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x25bk\" (UniqueName: \"kubernetes.io/projected/96ac2dba-808d-4839-9f27-9c77b6d1f97d-kube-api-access-x25bk\") on node \"crc\" DevicePath \"\"" Feb 18 20:39:07 crc kubenswrapper[4942]: I0218 20:39:07.253270 4942 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/96ac2dba-808d-4839-9f27-9c77b6d1f97d-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 18 20:39:07 crc kubenswrapper[4942]: I0218 20:39:07.281904 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:39:09 crc kubenswrapper[4942]: E0218 20:39:09.040218 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:39:09 crc kubenswrapper[4942]: I0218 20:39:09.050112 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96ac2dba-808d-4839-9f27-9c77b6d1f97d" path="/var/lib/kubelet/pods/96ac2dba-808d-4839-9f27-9c77b6d1f97d/volumes" Feb 18 20:39:11 crc kubenswrapper[4942]: E0218 20:39:11.062200 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:39:14 crc kubenswrapper[4942]: E0218 20:39:14.040918 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:39:15 crc kubenswrapper[4942]: E0218 20:39:15.039887 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:39:20 crc kubenswrapper[4942]: E0218 20:39:20.038978 4942 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:39:21 crc kubenswrapper[4942]: E0218 20:39:21.052976 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:39:24 crc kubenswrapper[4942]: E0218 20:39:24.063888 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:39:26 crc kubenswrapper[4942]: E0218 20:39:26.038628 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:39:29 crc kubenswrapper[4942]: E0218 20:39:29.039589 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:39:32 crc kubenswrapper[4942]: E0218 20:39:32.038878 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:39:34 crc kubenswrapper[4942]: E0218 20:39:34.038870 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:39:37 crc kubenswrapper[4942]: I0218 20:39:37.037969 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:39:37 crc kubenswrapper[4942]: E0218 20:39:37.716513 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/quay/busybox:latest" Feb 18 20:39:37 crc kubenswrapper[4942]: E0218 20:39:37.716654 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp2tv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(fa910027-8bd8-4779-9dc5-9071534fa252): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:39:37 crc kubenswrapper[4942]: E0218 20:39:37.718219 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:39:41 crc kubenswrapper[4942]: E0218 20:39:41.056681 
4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:39:42 crc kubenswrapper[4942]: E0218 20:39:42.039877 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:39:44 crc kubenswrapper[4942]: E0218 20:39:44.038712 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:39:47 crc kubenswrapper[4942]: E0218 20:39:47.037813 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:39:52 crc kubenswrapper[4942]: E0218 20:39:52.038277 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:39:53 crc kubenswrapper[4942]: E0218 20:39:53.038383 4942 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:39:54 crc kubenswrapper[4942]: E0218 20:39:54.039044 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:39:59 crc kubenswrapper[4942]: E0218 20:39:59.038938 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:39:59 crc kubenswrapper[4942]: E0218 20:39:59.048388 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:39:59 crc kubenswrapper[4942]: E0218 20:39:59.048541 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztqdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bpf4n_openshift-marketplace(8277de0c-d81c-4d35-a68a-97ca7a1edd6b): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:39:59 crc kubenswrapper[4942]: E0218 20:39:59.049820 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-bpf4n" 
podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:40:05 crc kubenswrapper[4942]: E0218 20:40:05.039549 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:40:06 crc kubenswrapper[4942]: E0218 20:40:06.038101 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:40:09 crc kubenswrapper[4942]: E0218 20:40:09.039190 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:40:13 crc kubenswrapper[4942]: E0218 20:40:13.040305 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:40:14 crc kubenswrapper[4942]: E0218 20:40:14.706338 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:40:14 crc 
kubenswrapper[4942]: E0218 20:40:14.706877 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cssts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-scbxm_openshift-marketplace(6b509214-59a6-4d42-9b1b-a0252c545c1d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:40:14 crc kubenswrapper[4942]: E0218 
20:40:14.708060 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:40:19 crc kubenswrapper[4942]: E0218 20:40:19.039122 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:40:20 crc kubenswrapper[4942]: E0218 20:40:20.038705 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:40:21 crc kubenswrapper[4942]: E0218 20:40:21.053864 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:40:28 crc kubenswrapper[4942]: E0218 20:40:28.039567 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:40:28 crc 
kubenswrapper[4942]: E0218 20:40:28.039966 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:40:31 crc kubenswrapper[4942]: E0218 20:40:31.058303 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:40:32 crc kubenswrapper[4942]: E0218 20:40:32.046423 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:40:36 crc kubenswrapper[4942]: E0218 20:40:36.041061 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:40:39 crc kubenswrapper[4942]: E0218 20:40:39.038804 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:40:40 crc kubenswrapper[4942]: E0218 20:40:40.039698 4942 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:40:45 crc kubenswrapper[4942]: E0218 20:40:45.040318 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:40:46 crc kubenswrapper[4942]: E0218 20:40:46.039999 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938"